💼 Hiring Quest – Fullstack Engineer (Python + React) @ Nixai Labs

Phase: Submission

Registration Deadline: November 24, 2025

Submission Deadline: December 2, 2025

Prizes

You get hired on a paid contract, with the opportunity to work on real-world AI products.

👋 We are Nixai Labs, an AI product company helping businesses transform how they work by building intelligent, user-centric tools powered by Large Language Models (LLMs) and automation frameworks. Our mission is to bridge the gap between AI research and real-world products: fast, reliable, and scalable.

We’re hiring a Fullstack Engineer (3–5 YOE) to join our core team building next-generation AI modules and platforms.

🕓 Start Date: Immediate
🌍 Location: Remote
💰 Salary: 35,000 – 50,000 EGP


🛠️ How the Hiring Quest Works

 1️⃣ Register for the quest
2️⃣ Receive the full challenge after registration closes
3️⃣ Submit your solution before the deadline
4️⃣ Top candidates are invited to a technical review session
5️⃣ One candidate will be hired


🔍 Who We’re Looking For

✅ 3–5 years of experience in Python + React (Next.js)
✅ Strong experience with FastAPI and RESTful APIs
✅ Comfortable working with LLM integrations (OpenAI, Anthropic, etc.)
✅ Familiar with LangChain / LlamaIndex and prompt-based pipelines
✅ Good understanding of frontend state management (React Query / Zustand / Redux)
✅ Skilled in async programming, modular architecture, and API-first development
✅ Solid grasp of SQL/NoSQL databases (PostgreSQL preferred)
💡 Bonus: Vector databases (Pinecone, Chroma), Docker, and API authentication (JWT, OAuth2)


🎯 Your Mission: “AI Knowledge Assistant Dashboard”

🧠 Business Context

Nixai Labs builds internal AI assistants that help teams automate workflows, summarize data, and answer domain-specific questions.

In this quest, you’ll build a mini version of an AI knowledge assistant with a backend powered by FastAPI + LangChain and a frontend built with Next.js.

📌 The Challenge

1️⃣ Step 1 – Knowledge Ingestion (Backend)

Create an API that:

  1. Accepts multiple .txt or .pdf files via upload.

  2. Extracts text content.

  3. Splits and stores embeddings in a vector database (e.g., Chroma, FAISS, or Pinecone).

Use LangChain’s text splitters and embedding models (e.g., OpenAI embeddings).

Endpoints:

  1. POST /api/upload → upload files and process them

  2. GET /api/docs → list uploaded docs with metadata (name, chunks, embedding count)
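
To make the expected shape concrete, here is a minimal sketch of the ingestion side, assuming FastAPI, Chroma as the vector store, and OpenAI embeddings. Import paths vary between LangChain releases, and the in-memory `docs_registry` is an illustrative placeholder rather than a required choice:

```python
# Minimal ingestion sketch: FastAPI + LangChain splitter + Chroma + OpenAI embeddings.
# Import paths and the in-memory registry are assumptions, not requirements.
import io

from fastapi import FastAPI, File, UploadFile
from pypdf import PdfReader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma

app = FastAPI()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
store = Chroma(
    collection_name="knowledge",
    embedding_function=OpenAIEmbeddings(),
    persist_directory="./chroma_db",
)
docs_registry: list[dict] = []  # in-memory metadata; a real solution would persist this in PostgreSQL


@app.post("/api/upload")
async def upload(files: list[UploadFile] = File(...)) -> dict:
    processed = []
    for f in files:
        raw = await f.read()
        if f.filename.lower().endswith(".pdf"):
            # Extract text page by page from the PDF
            text = "".join(page.extract_text() or "" for page in PdfReader(io.BytesIO(raw)).pages)
        else:
            text = raw.decode("utf-8", errors="ignore")
        chunks = splitter.split_text(text)
        store.add_texts(chunks, metadatas=[{"source": f.filename}] * len(chunks))
        meta = {"name": f.filename, "chunks": len(chunks), "embedding_count": len(chunks)}
        docs_registry.append(meta)
        processed.append(meta)
    return {"processed": processed}


@app.get("/api/docs")
def list_docs() -> list[dict]:
    return docs_registry
```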


2️⃣ Step 2 – Ask the Assistant (Backend)

Expose an endpoint:

  1. POST /api/ask with body { "question": "..." }

  2. Uses a LangChain retriever + LLM chain to:

    1. Retrieve relevant chunks

    2. Generate an answer using the OpenAI API (or a mock response if needed)

  3. Returns: { "answer": "...", "sources": [...] }
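
A possible sketch of this endpoint, reusing the `app` and `store` objects from the ingestion sketch above. Because chain helpers differ between LangChain versions, the retrieve → prompt → generate steps are composed by hand here, and the model name is an assumption:

```python
# Sketch of /api/ask, reusing `app` and `store` from the upload sketch above.
from pydantic import BaseModel
from langchain_openai import ChatOpenAI


class AskRequest(BaseModel):
    question: str


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)   # model name is an assumption
retriever = store.as_retriever(search_kwargs={"k": 4})  # top-4 chunks


@app.post("/api/ask")
async def ask(req: AskRequest) -> dict:
    docs = retriever.invoke(req.question)               # retrieve the most relevant chunks
    context = "\n\n".join(d.page_content for d in docs)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {req.question}"
    )
    answer = llm.invoke(prompt).content                 # swap for a mocked string when offline
    sources = sorted({d.metadata.get("source", "unknown") for d in docs})
    return {"answer": answer, "sources": sources}
```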


3️⃣ Step 3 – Web Dashboard (Frontend)

Build a simple dashboard using Next.js (App Router) that allows users to:

  1. Upload documents

  2. See processed docs and embeddings count

  3. Ask questions via a chat-like interface

  4. View model responses and their sources

Use:

  1. TailwindCSS or ChakraUI

  2. React Query / SWR for data fetching

  3. Minimal but clean UI and responsive layout


4️⃣ Step 4 – Authentication (Optional Bonus)

Implement basic login/signup flow with JWT or NextAuth.
Users should see only their own uploaded docs and chat history.
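
If you choose the FastAPI + JWT route, a minimal protection dependency could look like the sketch below, assuming python-jose for token handling; `SECRET_KEY` and the payload shape are placeholders:

```python
# Minimal JWT-protection sketch for FastAPI, assuming python-jose and OAuth2 bearer tokens.
# SECRET_KEY and the payload shape are illustrative assumptions.
from datetime import datetime, timedelta, timezone

from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from jose import JWTError, jwt

SECRET_KEY = "change-me"  # load from the environment in a real setup
ALGORITHM = "HS256"
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/api/login")


def create_access_token(user_id: str) -> str:
    payload = {"sub": user_id, "exp": datetime.now(timezone.utc) + timedelta(hours=12)}
    return jwt.encode(payload, SECRET_KEY, algorithm=ALGORITHM)


def current_user(token: str = Depends(oauth2_scheme)) -> str:
    try:
        return jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])["sub"]
    except (JWTError, KeyError):
        raise HTTPException(status.HTTP_401_UNAUTHORIZED, "Invalid or expired token")

# Routes can then scope data per user, e.g.:
# @app.get("/api/docs")
# def list_docs(user_id: str = Depends(current_user)): ...
```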


🗄️ Suggested Tech Stack

  1. Backend: FastAPI + LangChain

  2. Frontend: Next.js 14 (React 18) or 15 (React 19)

  3. Database: PostgreSQL / SQLite

  4. Vector DB: Chroma / FAISS / Pinecone

  5. Auth: JWT / NextAuth

  6. Infra: Docker Compose

  7. Docs: Swagger + README.md


🧩 Example API Flow

  1. User uploads a file → /api/upload

  2. Server extracts text + embeddings → saves to DB

  3. User asks a question → /api/ask

  4. Backend retrieves top chunks → generates answer

  5. Frontend displays the full Q&A flow
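
For reference, here is the same flow exercised end to end from a small httpx script; the base URL and file name are assumptions:

```python
# End-to-end smoke test of the flow above using httpx.
# Endpoint paths follow the spec; the base URL and file name are assumptions.
import httpx

BASE = "http://localhost:8000"

with httpx.Client(base_url=BASE, timeout=60) as client:
    # 1. Upload a document
    with open("handbook.pdf", "rb") as fh:
        r = client.post("/api/upload", files={"files": ("handbook.pdf", fh, "application/pdf")})
    print(r.json())

    # 2. Inspect processed docs and embedding counts
    print(client.get("/api/docs").json())

    # 3. Ask a question and show the answer + sources
    r = client.post("/api/ask", json={"question": "What is the vacation policy?"})
    print(r.json())
```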


🎁 Bonus Points

  1. ✨ Docker Compose (API + DB + Vector Store)

  2. ✨ Persistent chat history per user

  3. ✨ Swagger + OpenAPI docs

  4. ✨ Retry logic & async background tasks

  5. ✨ Unit tests for retriever & routes (a small example sketch follows this list)

  6. ✨ “Source viewer” UI (click to expand retrieved docs)
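
As a starting point for the unit-test bonus, here is a hypothetical pytest for the /api/ask route with the retriever and LLM stubbed out; the module path `backend.main` and the patched attribute names are assumptions that depend on how you structure your code:

```python
# Hypothetical pytest for the /api/ask route; `backend.main` and the patched
# attribute names are assumptions, and the retriever/LLM are stubbed out.
from unittest.mock import MagicMock, patch

from fastapi.testclient import TestClient

from backend.main import app


def test_ask_returns_answer_and_sources():
    fake_doc = MagicMock(page_content="Chunk text", metadata={"source": "doc.txt"})
    with patch("backend.main.retriever") as retriever, patch("backend.main.llm") as llm:
        retriever.invoke.return_value = [fake_doc]
        llm.invoke.return_value = MagicMock(content="Stubbed answer")
        client = TestClient(app)
        resp = client.post("/api/ask", json={"question": "What is in the doc?"})
    assert resp.status_code == 200
    body = resp.json()
    assert body["answer"] == "Stubbed answer"
    assert body["sources"] == ["doc.txt"]
```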


🧰 What You Should Submit

📂 GitHub Repository with:

  1. Organized code: /backend, /frontend, /docs

  2. README.md with setup steps

  3. docker-compose.yml for local run

  4. Optional ARCHITECTURE.md with diagram

📹 10-Minute Video:

  1. 🎥 3 min: Introduce yourself + two technical challenges you’ve solved

  2. ⚙️ 7 min: Demo your app (upload → ask → answer) and explain your architecture


📊 Evaluation Criteria

  1. Code Quality & Structure – 25%

  2. AI Integration & Backend Logic – 25%

  3. Frontend Functionality & UX – 20%

  4. Database & Vector Store Design – 15%

  5. Documentation & Setup – 10%

  6. Reliability & Error Handling – 5%

  7. Bonus: Docker setup, Authentication, Chat History, Tests, or elegant UI polish ✨


📩 After Submission

Top candidates will be invited to a technical review session to discuss:

  1. System architecture

  2. AI integration design choices

  3. How you would scale it for real production

👉 Final hiring decision within 5 business days after the review.

CQ For Digital Solution, trading as Code Quests
Making the world a better place through competitive crowdsourced programming.