Manual Testing Quest

Phase: Ended

Registration Deadline: August 25, 2024

Submission Deadline: September 1, 2024

Prizes

1st Place: 2500 EGP

2nd Place: 1000 EGP

3rd Place: 500 EGP

Brief

Code-quests is a platform that helps businesses publish projects (called Quests) and invites a community of developers and designers to compete to build the highest-quality implementation or design.

We are inviting you to test our platform and help us improve it by identifying bugs, writing detailed test cases, and reporting your findings. This quest is perfect for testers who love diving into new products and making them more robust.

You will be invited to access our platform's test environment after the registration period ends.

Objective:

  • Conduct a thorough manual test of the platform.

  • Write detailed test cases for key functionalities.

  • Report any bugs you find, with steps to reproduce them.

Requirements:

Test Cases Creation:

  1. Explore the Code-quests platform and identify critical user flows (e.g., user registration, login, logout, quest listing, quest view, quest registration, quest submission).

  2. Write clear and detailed test cases for these flows, including expected results.

  3. Consider different scenarios, including edge cases.

  4. Example Test Case:

    1. Test Case ID: TC-001

    2. Title: User Registration

    3. Preconditions: The user should not be logged in.

    4. Steps to Execute:

      1. Navigate to the registration page.

      2. Fill in the required fields (Name, Email, Password).

      3. Click the "Register" button.

    5. Expected Result: The user should be successfully registered and redirected to the welcome page.
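
For consistency across testers, the same fields can also be captured in a machine-readable form. Below is a minimal sketch in Python; the TestCase class and its field names simply mirror the example above and are an illustrative convention, not a required format.

  from dataclasses import dataclass

  @dataclass
  class TestCase:
      """One manual test case, mirroring the fields of the example above."""
      case_id: str         # e.g., "TC-001"
      title: str
      preconditions: str
      steps: list[str]     # "Steps to Execute", in order
      expected_result: str

  # The example test case above, expressed as data.
  tc_001 = TestCase(
      case_id="TC-001",
      title="User Registration",
      preconditions="The user should not be logged in.",
      steps=[
          "Navigate to the registration page.",
          "Fill in the required fields (Name, Email, Password).",
          'Click the "Register" button.',
      ],
      expected_result="The user is registered and redirected to the welcome page.",
  )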

Bug Hunting:

  1. Execute your test cases and interact with the platform as a typical user.

  2. Identify and document any bugs or issues you encounter.

  3. For each bug, provide a detailed description, steps to reproduce, expected vs. actual behavior, and any relevant screenshots or logs.

  4. Classify the bug as one of the following Bug Types:

    1. Functional Bug: Improper system behavior. Examples:

      1. A button that doesn't work.

      2. A link that leads to a 404 Page-not-found error.

    2. UI Bug: Layout issues (misalignment, overlapping, spacing), font and color issues, etc.

    3. UX / Usability Bug: User experience issues and possible enhancements to the current application.

    4. Content Bug: Grammar and/or spelling issues.

  5. Example Bug Report Documentation:

    1. Bug ID: BUG001

    2. Title: Error Message on Registration Page When Submitting Empty Form

    3. Steps to Reproduce:

      1. Navigate to the registration page.

      2. Leave all fields empty and click the “Register” button.

    4. Expected Result:

      1. A clear error message should appear next to each empty field, guiding the user to fill them in.

    5. Actual Result:

      1. A generic “Error occurred” message appears without specifying the fields that need to be filled.

    6. Severity: Medium

    7. Priority: High

    8. Environment: Chrome v100.0, Windows 10

    9. Attachments: [Screenshot or video of bug]

    10. Bug Classification: [One of the four bug types listed above]
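
When you accumulate many reports, a small script can check that each one carries every field from the template above before you submit. A minimal sketch, assuming reports are kept as Python dicts; the key names are an illustrative convention, not a platform requirement.

  REQUIRED_FIELDS = [
      "bug_id", "title", "steps_to_reproduce", "expected_result",
      "actual_result", "severity", "priority", "environment",
      "attachments", "bug_classification",
  ]

  def missing_fields(report: dict) -> list[str]:
      """Return the names of required fields that are absent or empty."""
      return [f for f in REQUIRED_FIELDS if not report.get(f)]

  # The example bug report above, expressed as data.
  report = {
      "bug_id": "BUG001",
      "title": "Error Message on Registration Page When Submitting Empty Form",
      "steps_to_reproduce": [
          "Navigate to the registration page.",
          'Leave all fields empty and click the "Register" button.',
      ],
      "expected_result": "A clear error message next to each empty field.",
      "actual_result": 'A generic "Error occurred" message with no field details.',
      "severity": "Medium",
      "priority": "High",
      "environment": "Chrome v100.0, Windows 10",
      "attachments": ["screenshot-of-bug.png"],  # hypothetical file name
      "bug_classification": "UX / Usability Bug",  # one of the four types above
  }

  assert missing_fields(report) == []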

Reporting:

  • Submit your test cases, bug reports, and suggestions for improvement in a well-organized document.

  • Prioritize bugs based on their impact on user experience (see the sketch below).
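
One simple way to order findings is to sort by severity first and priority second, so the most impactful bugs lead the document. A minimal sketch in Python; the rank scales below are assumptions, so substitute whatever scale your submission uses.

  # Illustrative rank maps: lower rank = more impactful.
  SEVERITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}
  PRIORITY_RANK = {"High": 0, "Medium": 1, "Low": 2}

  bugs = [
      {"bug_id": "BUG001", "severity": "Medium", "priority": "High"},
      {"bug_id": "BUG002", "severity": "Critical", "priority": "High"},
      {"bug_id": "BUG003", "severity": "Low", "priority": "Low"},
  ]

  # Most impactful first; ties broken by bug ID for a stable order.
  bugs.sort(key=lambda b: (SEVERITY_RANK[b["severity"]],
                           PRIORITY_RANK[b["priority"]],
                           b["bug_id"]))

  for b in bugs:
      print(b["bug_id"], "-", b["severity"], "/", b["priority"])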

Acceptance Criteria

Thoroughness (30 points)

  1. Completeness of testing: How many different scenarios, features, and edge cases were explored?

  2. Coverage of key functionalities: Did the testing address both primary and secondary workflows of the application?

  3. Identification of unusual or corner cases.

Clarity and Structure of Test Cases (25 points)

  1. Organization: Are the test cases easy to navigate and understand?

  2. Detailed steps: Are the steps to execute the test cases clear and comprehensive?

  3. Use of standardized format: Did the participant use a consistent template for all test cases?

Clarity of Bug Reports (25 points)

  1. Detailed reproduction steps: Are the steps to reproduce the bugs clear and precise?

  2. Inclusion of expected vs. actual results: Does the report clearly outline what was expected vs. what happened?

  3. Severity rating: Is the severity level of each reported bug accurate and justified?

Critical Thinking and Problem Solving (15 points)

  1. Innovative approaches: Did the participant demonstrate creative testing methods?

  2. Priority of issues: Did they focus on high-impact areas first?

  3. Suggestions for improvement: Did the participant offer constructive feedback or suggestions based on their findings?

Overall Presentation and Submission (5 points)

Format and professionalism: Is the submission presented neatly, with a professional tone? 

The five scoring criteria sum to 100 points. The minimum acceptable score is 90 (90% of 100). First, Second, and Third places will be awarded to the three highest-scoring submissions above 80.

If two submissions earn the same score, the earlier submission will take the higher place.

Making the world a better place through competitive, crowdsourced programming.