Enhancing Context Prompts for the AI Coding Agent Seen-18 (س18)

Phase: Ended

Registration Deadline: February 28, 2025

Submission Deadline: March 15, 2025

Prizes

1st Place: 20,000 EGP

2nd Place: 8,000 EGP

3rd Place: 5,000 EGP

Seen-18 is an AI-powered autonomous coding agent that integrates seamlessly into your editor. It can:

  • Communicate in natural language (English & Arabic).

  • Read and write files directly in your workspace.

  • Execute terminal commands.

  • Automate browser actions.

  • Integrate with any OpenAI-compatible or custom API/model.

  • Handle various roles, including requirements gathering, coding, architecture, DevOps, and QA.

  • Monitor costs in real time—know exactly how much you're spending on prompts!

  • Detect errors from commands and linters, providing intelligent suggestions for fixes.

🔗 Repository & Extension: 

Objective of This Quest

1- Enhance Contextual Prompts: We need to improve how Seen-18 generates and interprets context prompts to ensure compatibility with all service providers.

2- Dynamic File Inclusion: Develop a method to include only relevant files in the context prompt. This involves selecting files based on current user activity or project context, then prioritizing and sending only the most pertinent files to reduce API load and improve response relevance.
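The dynamic file inclusion idea above could be sketched roughly as follows. This is only an illustrative approach, not Seen-18's actual implementation: all function names are hypothetical, relevance is a naive term-frequency score, and tokens are crudely approximated as `len(text) // 4`.

```python
from collections import Counter

def score_relevance(file_text: str, query_terms: list[str]) -> float:
    """Naive term-frequency relevance score for one file (illustrative only)."""
    tokens = Counter(file_text.lower().split())
    total = sum(tokens.values()) or 1
    return sum(tokens[t.lower()] / total for t in query_terms)

def select_context_files(files: dict[str, str], query_terms: list[str],
                         token_budget: int = 4000) -> list[str]:
    """Rank files by relevance to the user's current task and keep only
    those that fit within the token budget, reducing API payload size."""
    ranked = sorted(files, key=lambda p: score_relevance(files[p], query_terms),
                    reverse=True)
    selected, used = [], 0
    for path in ranked:
        cost = len(files[path]) // 4  # rough token estimate
        if used + cost > token_budget:
            continue
        selected.append(path)
        used += cost
    return selected
```

A real solution would likely replace the term-frequency score with embeddings or editor signals (open tabs, recent edits, import graphs), but the ranking-plus-budget structure stays the same.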

By refining these aspects, we can make Seen-18 more intelligent, resource-efficient, and adaptable across different AI ecosystems. 🚀

After registration, there will be a workshop to explain the quest and the key concepts needed. At the end of the first week, we will hold another workshop to address any questions you may have. Additionally, before the submission, we will conduct one more workshop to provide further clarifications and answer your questions.

Seen-18 Flow

Prompt Guidelines

  • OpenAI (GPT-4, o3): General coding assistance, debugging, optimization
    Documentation

  • Anthropic (Claude 3.5 Sonnet): Safe, ethical coding, structured explanations
    Documentation

  • Google DeepMind (Gemini 2.0): Multimodal (code + visual debugging), real-time code analysis
    Documentation

  • Meta LLaMA 3: Open-source AI, optimized for research and AI explainability
    Documentation

  • Mistral (Mixtral 8x7B): High-speed, efficient AI for fast coding assistance
    Documentation

  • DeepSeek Coder: Advanced coding generation, mathematical reasoning, function optimization
    Documentation
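Since each provider above expects a slightly different request shape, one common pattern for the cross-provider compatibility asked for in Objective 1 is to keep an internal message list and adapt it per provider at send time. The sketch below assumes two well-known differences: OpenAI-compatible APIs accept the system prompt as a regular message, while Anthropic's Messages API takes it as a top-level `system` field. Everything else (function names, the `ADAPTERS` table) is hypothetical.

```python
from typing import TypedDict

class Message(TypedDict):
    role: str      # "system" | "user" | "assistant"
    content: str

def to_openai(messages: list[Message]) -> dict:
    """OpenAI-compatible APIs take the system prompt as an ordinary message."""
    return {"messages": messages}

def to_anthropic(messages: list[Message]) -> dict:
    """Anthropic's Messages API takes the system prompt as a top-level field."""
    system = "\n".join(m["content"] for m in messages if m["role"] == "system")
    rest = [m for m in messages if m["role"] != "system"]
    return {"system": system, "messages": rest}

ADAPTERS = {"openai": to_openai, "anthropic": to_anthropic}

def build_request(provider: str, messages: list[Message]) -> dict:
    """Dispatch to the provider's payload shape; default to OpenAI format,
    which most compatible and custom endpoints accept."""
    return ADAPTERS.get(provider, to_openai)(messages)
```

Adding a new provider then means writing one adapter function rather than touching the prompt-building logic itself.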

Know More about Seen-18

https://youtu.be/jBlJg97DrSk?si=wuEuLJLIA1ZBaz5U 

https://youtu.be/uVn6MpDzf4c?si=WjrxFeld5LleP_uU 

https://youtu.be/W5UgF5H-eE8?si=Ak-hfLieKP6Tg4qw 

https://youtu.be/T1jriJ5gXiI?si=pQi-pn02nQbERbzj 

Evaluation Criteria 

Please read the Scorecard to see how the winners will be selected, and how we evaluate the best designs.

The minimum acceptable score is 80 out of 100 (80%). If two submissions earn the same score, the earlier submission takes the higher place.

Note: If your score is below 80%, you will receive detailed feedback on your submission, and we will not use any part of your work.


Making the world a better place through competitive crowdsourcing programming.