💼 Hiring Quest – Fullstack Engineer (Python + React) @ Nixai Labs
Challenge-based hiring quest with structured evaluation and real project outcomes.
Top performers get hired with a paid contract and the opportunity to work on real-world projects.
👋 We are Nixai Labs, an AI product company helping businesses transform how they work by building intelligent, user-centric tools powered by Large Language Models (LLMs) and automation frameworks.
Our mission is to bridge the gap between AI research and real-world products — fast, reliable, and scalable.
We’re hiring a Fullstack Engineer (3–5 YOE) to join our core team building next-generation AI modules and platforms.
🕓 Start Date: Immediate
🌍 Location: Remote
💰 Salary: 35,000 – 50,000 EGP
🛠️ How the Hiring Quest Works
1️⃣ Register for the quest
2️⃣ Receive the full challenge after registration closes
3️⃣ Submit your solution before the deadline
4️⃣ Top candidates are invited to a technical review session
5️⃣ One candidate will be hired
🔍 Who We’re Looking For
✅ 3–5 years of experience in Python + React (Next.js)
✅ Strong experience with FastAPI and RESTful APIs
✅ Comfortable working with LLM integrations (OpenAI, Anthropic, etc.)
✅ Familiar with LangChain / LlamaIndex and prompt-based pipelines
✅ Good understanding of frontend state management (React Query / Zustand / Redux)
✅ Skilled in async programming, modular architecture, and API-first development
✅ Solid grasp of SQL/NoSQL databases (PostgreSQL preferred)
💡 Bonus: Vector databases (Pinecone, Chroma), Docker, and API authentication (JWT, OAuth2)
🎯 Your Mission: “AI Knowledge Assistant Dashboard”
🧠 Business Context
Nixai Labs builds internal AI assistants that help teams automate workflows, summarize data, and answer domain-specific questions.
In this quest, you’ll build a mini version of an AI knowledge assistant with a backend powered by FastAPI + LangChain and a frontend built with Next.js.
📌 The Challenge
1️⃣ Step 1 – Knowledge Ingestion (Backend)
Create an API that:
Accepts multiple .txt or .pdf files via upload.
Extracts text content.
Splits and stores embeddings in a vector database (e.g., Chroma, FAISS, or Pinecone).
Use LangChain’s text splitters and embedding models (e.g., OpenAI embeddings).
Endpoints:
POST /api/upload → upload files and process them
GET /api/docs → list uploaded docs with metadata (name, chunks, embedding count)
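The splitting step above can be sketched without LangChain. Below is a minimal stand-in, assuming fixed-size character chunks with overlap; in the real stack, LangChain's text splitters (e.g., `RecursiveCharacterTextSplitter`) play this role and also respect sentence and paragraph boundaries:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    A simplified stand-in for LangChain's text splitters; chunk_size and
    overlap values here are illustrative, not required by the challenge.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Each chunk would then be embedded and stored in the vector DB
# alongside metadata (source file name, chunk index).
```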
2️⃣ Step 2 – Ask the Assistant (Backend)
Expose an endpoint:
POST /api/ask → { "question": "..." }
Uses LangChain Retriever + LLM Chain to:
Retrieve relevant chunks
Generate an answer using OpenAI API (or mock response if needed)
Return: { "answer": "...", "sources": [...] }
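The retrieve-then-answer step can be sketched as follows. This is a mock: a bag-of-words "embedder" and cosine similarity stand in for real OpenAI embeddings, and the "answer" simply echoes the best-matching chunk rather than calling an LLM. The function and field names (`ask`, `vector`, `meta`) are illustrative, not a required interface:

```python
import math
from collections import Counter

def mock_embed(text: str) -> dict[str, int]:
    # Bag-of-words "embedding" standing in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ask(question: str, chunks: list[dict], top_k: int = 2) -> dict:
    """Retrieve the top-k chunks and assemble the /api/ask response shape."""
    q_vec = mock_embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q_vec, c["vector"]), reverse=True)
    sources = ranked[:top_k]
    # A real implementation would pass these chunks to the LLM as context;
    # here the "answer" just returns the best-matching chunk's text.
    return {
        "answer": sources[0]["text"] if sources else "No relevant context found.",
        "sources": [s["meta"] for s in sources],
    }
```

In the real backend, `mock_embed` would be replaced by the embedding model used at ingestion time, and the retrieved chunks would be injected into the LLM prompt.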
3️⃣ Step 3 – Web Dashboard (Frontend)
Build a simple dashboard using Next.js (App Router) that allows users to:
Upload documents
See processed docs and embeddings count
Ask questions via chat-like interface
Show model responses + sources
Use:
TailwindCSS or ChakraUI
React Query / SWR for data fetching
Minimal but clean UI and responsive layout
4️⃣ Step 4 – Authentication (Optional but Bonus)
Implement basic login/signup flow with JWT or NextAuth.
Users should see only their own uploaded docs and chat history.
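A minimal sketch of the JWT route, assuming the PyJWT library (`import jwt`); the helper names and the owner-filtering function are illustrative, and a real app would wire these into FastAPI dependencies and load the secret from the environment:

```python
import datetime

import jwt  # PyJWT

SECRET = "change-me"  # assumption: loaded from an env var in a real deployment

def create_token(user_id: str, hours: int = 24) -> str:
    """Issue a signed, expiring token for a logged-in user."""
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(hours=hours),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def current_user(token: str) -> str:
    """Return the user id from a token; raises jwt.InvalidTokenError if invalid."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])["sub"]

def docs_for_user(user_id: str, all_docs: list[dict]) -> list[dict]:
    # Scope listings so each user sees only their own uploads.
    return [d for d in all_docs if d["owner"] == user_id]
```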
🗄️ Suggested Tech Stack
Backend : FastAPI + LangChain
Frontend : Next.js 15 (React 19)
Database : PostgreSQL / SQLite
Vector DB: Chroma / FAISS / Pinecone
Auth: JWT / NextAuth
Infra: Docker Compose
Docs: Swagger + README.md
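The stack above could be wired together locally with a compose file along these lines; the service names, ports, images, and environment variables are assumptions for illustration, not requirements:

```yaml
# Sketch of a local-dev docker-compose.yml for the suggested stack.
services:
  api:
    build: ./backend
    ports: ["8000:8000"]
    environment:
      - DATABASE_URL=postgresql://app:app@db:5432/app
      - CHROMA_HOST=vectordb
    depends_on: [db, vectordb]
  frontend:
    build: ./frontend
    ports: ["3000:3000"]
    environment:
      - NEXT_PUBLIC_API_URL=http://localhost:8000
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
  vectordb:
    image: chromadb/chroma
    ports: ["8001:8000"]
```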
🧩 Example API Flow
User uploads a file → /api/upload
Server extracts text + embeddings → saves to DB
User asks a question → /api/ask
Backend retrieves top chunks → generates answer
Frontend displays the full Q&A flow
🎁 Bonus Points
✨ Docker Compose (API + DB + Vector Store)
✨ Persistent chat history per user
✨ Swagger + OpenAPI docs
✨ Retry logic & async background tasks
✨ Unit tests for retriever & routes
✨ “Source viewer” UI (click to expand retrieved docs)
🧰 What You Should Submit
📂 GitHub Repository with:
Organized code: /backend, /frontend, /docs
README.md with setup steps
docker-compose.yml for local run
Optional ARCHITECTURE.md with diagram
📹 10-Minute Video
🎥 3 min — Introduce yourself + two technical challenges you’ve solved
⚙️ 7 min — Demo your app (upload → ask → answer) and explain your architecture
📊 Evaluation Criteria
Code Quality & Structure 25%
AI Integration & Backend Logic 25%
Frontend Functionality & UX 20%
Database & Vector Store Design 15%
Documentation & Setup 10%
Reliability & Error Handling 5%
Bonus: Docker setup, Authentication, Chat History, Tests, or elegant UI polish ✨
📩 After Submission
Top candidates will be invited to a technical review session to discuss:
System architecture
AI integration design choices
How you would scale it for real production
👉 Final hiring decision within 5 business days after the review.