Manual Testing Quest: Uncover Bugs and Improve the CodeQuests Platform!

Phase: Ended

Registration Deadline: January 28, 2025

Submission Deadline: February 1, 2025

Prizes

1st Place: 2500 EGP

2nd Place: 1000 EGP

3rd Place: 500 EGP

Code-quests is a platform that helps businesses publish projects (called Quests) and invites a community of developers and designers to compete to build the best, highest-quality implementation or design.

We invite you to test the latest front-end improvements on the CodeQuests dashboard and help us enhance it by identifying bugs, writing detailed test cases, and sharing your findings. This quest is perfect for testers who are passionate about exploring new products and making them stronger.

Once the registration period ends, you’ll gain access to our platform's test environment along with all the necessary documentation.

Objectives:

  1. Conduct a thorough manual test of the platform.

  2. Write detailed test cases for key functionalities.

  3. Report any bugs you find, with steps to reproduce them.

  4. Run regression testing on existing features.

Requirements:

Test Cases Creation:

  1. Explore the newly added features.

  2. Write clear and detailed test cases for these flows, including expected results.

  3. Consider different scenarios, including edge cases.

  4. Test case template reference (see the illustrative sketch below).
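As a rough illustration only (the linked template reference is authoritative; the field names and values below are assumptions rather than the official format), a single test case entry could look like this:

  Test Case ID: TC-001
  Title: Registered user submits an entry to an open quest
  Preconditions: The user is registered and logged in to the test environment
  Steps: 1) Open a quest from the dashboard, 2) Choose to submit an entry, 3) Attach the deliverable, 4) Confirm the submission
  Expected Result: The submission is saved and a confirmation message is displayed
  Actual Result: (recorded during execution)
  Status: Pass / Fail / Blocked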

Bug Hunting:

  1. Execute your test cases and interact with the platform as a typical user.

  2. Identify and document any bugs or issues you encounter.

  3. Bug report reference (see the illustrative sketch below).
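As a rough illustration only (the linked bug report reference is authoritative; the fields below are assumptions about what a typical report contains), each bug report would usually capture:

  Bug ID: a unique identifier, e.g. BUG-001
  Title: a short, specific summary of the defect
  Environment: browser, operating system, and the test-environment URL used
  Steps to Reproduce: numbered, exact steps starting from a clean state
  Expected Result: what the feature should have done
  Actual Result: what actually happened, ideally with a screenshot or recording
  Severity: Critical / Major / Minor / Trivial, with a one-line justification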

Final Reporting Status:

  1. Reporting status reference (see the illustrative sketch below).
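As a rough illustration only (the linked reporting status reference is authoritative; the layout below is an assumption), a final status report generally summarizes:

  Test cases written and executed: totals per feature or flow
  Results: counts of passed, failed, and blocked test cases
  Bugs reported: total count, broken down by severity
  Coverage: areas tested, areas not tested, and why
  Overall assessment: readiness of the tested features, plus suggested improvements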

Acceptance Criteria

Thoroughness (30 points)

  1. Completeness of testing: How many different scenarios, features, and edge cases were explored?

  2. Coverage of key functionalities: Did the testing address both primary and secondary workflows of the application?

  3. Identification of unusual or corner cases.

Clarity and Structure of Test Cases (25 points)

  1. Organization: Are the test cases easy to navigate and understand?

  2. Detailed steps: Are the steps to execute the test cases clear and comprehensive?

  3. Use of standardized format: Did the participant use a consistent template for all test cases?

Clarity of Bug Reports (25 points)

  1. Detailed reproduction steps: Are the steps to reproduce the bugs clear and precise?

  2. Inclusion of expected vs. actual results: Does the report clearly outline what was expected vs. what happened?

  3. Severity rating: Is the severity level of each reported bug accurate and justified?

Critical Thinking and Problem Solving (15 points)

  1. Innovative approaches: Did the participant demonstrate creative testing methods?

  2. Priority of issues: Did they focus on high-impact areas first?

  3. Suggestions for improvement: Did the participant offer constructive feedback or suggestions based on their findings?

Overall Presentation and Submission (5 points)

  1. Format and professionalism: Is the submission presented neatly, with a professional tone?

The minimum acceptable score is 90 out of 100 (90%). First, Second, and Third place go to the highest-scoring submissions above 80.

If two submissions earn the same score, the earlier submission is ranked higher.

Making the world a better place through competitive, crowdsourced programming.