
Discover your next Milestone.

Choose from industry-vetted challenges. Build locally, push to GitHub, and earn cryptographic proof of your engineering skills.

LLM Observability & Tracing System

Backend · Advanced · 365-day access · 199 onwards

Log every LLM call with latency, token usage, and model output. Build a query layer surfacing slow calls, expensive prompts, error rates, and cost trends over time.

  • Instrument every LLM call with latency, token usage, and cost tracking
  • Group related LLM calls into traces for end-to-end session visibility
  • Write analytical SQL queries for performance monitoring (slow calls, cost-by-model)
  • Build a live observability dashboard using Chart.js
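The instrumentation idea behind the first two bullets can be sketched as a thin wrapper around each LLM call. This is a minimal illustration, not the challenge's actual API: the `llm_fn` signature, model name, and per-1K-token prices are all assumptions, and a real build would write to a database rather than a list.

```python
import time

# Hypothetical per-1K-token prices; real rates depend on the provider/model.
PRICE_PER_1K = {"gpt-4o-mini": 0.00015}

call_log = []  # stand-in for a database table of call records

def log_llm_call(model, prompt, llm_fn):
    """Wrap an LLM call, recording latency, token usage, and estimated cost."""
    start = time.perf_counter()
    text, tokens = llm_fn(prompt)  # assumed to return (output_text, token_count)
    latency_ms = (time.perf_counter() - start) * 1000
    cost = tokens / 1000 * PRICE_PER_1K.get(model, 0)
    call_log.append({"model": model, "latency_ms": latency_ms,
                     "tokens": tokens, "cost": cost, "output": text})
    return text
```

Records shaped like this are what the analytical SQL layer (slow calls, cost-by-model) queries over.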

Multi-Agent Orchestration Backend

Backend · Advanced · 365-day access · 199 onwards

Coordinate multiple specialized AI agents — planner, researcher, writer — passing context and managing state between them. Return a streamed unified result to the client.

  • Design a multi-agent architecture with clearly defined agent roles
  • Implement a stateful agent graph using LangGraph
  • Stream intermediate agent progress to clients using WebSockets
  • Implement per-agent timeouts and graceful fallback strategies
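The planner → researcher → writer pattern can be sketched without LangGraph as a linear pipeline of functions sharing a state dict; the agent bodies below are placeholders, and LangGraph adds the stateful-graph machinery (branching, checkpoints) on top of this idea.

```python
def planner(state):
    state["plan"] = [f"research {state['topic']}", "draft summary"]
    return state

def researcher(state):
    state["notes"] = f"key facts about {state['topic']}"
    return state

def writer(state):
    state["result"] = f"Report on {state['topic']}: {state['notes']}"
    return state

def run_graph(topic):
    """Run agents in sequence, yielding intermediate progress after each step
    (the same events a WebSocket layer would stream to the client)."""
    state = {"topic": topic}
    for agent in (planner, researcher, writer):
        state = agent(state)
        yield agent.__name__, state
```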

Async AI Job Queue

Backend · Intermediate · 365-day access · 149 onwards

Build a queue where users submit long AI tasks — document analysis, batch summarization — and poll for results. Handle failures, retries, dead letters, and status webhooks.

  • Understand the job queue pattern and when to use async processing
  • Set up and connect Celery with Redis as a message broker
  • Build APIs for job submission, status polling, and result retrieval
  • Implement automatic retries with exponential backoff for failed tasks
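The retry bullet boils down to doubling the delay after each failure. A minimal framework-free sketch (in the actual project, Celery's built-in retry mechanism plays this role, and exhausted jobs land in a dead-letter queue):

```python
import time

def retry_with_backoff(task, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Run `task`, retrying on failure with delays of base, 2x base, 4x base, ..."""
    for attempt in range(max_retries + 1):
        try:
            return task()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted: route the job to a dead-letter queue
            sleep(base_delay * 2 ** attempt)
```

The injectable `sleep` makes the backoff schedule easy to test without real waiting.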

Tool Server for AI Agents

Backend · Intermediate · 365-day access · 149 onwards

Expose web search, code execution, and calculator as standardized tools via a REST API. Connect it to an agent and watch it call your tools autonomously.

  • Design a standardized tool API that exposes capabilities to AI agents
  • Implement real tool functions: web search, calculation, and datetime
  • Understand the JSON Schema format for describing tool inputs
  • Connect a LangChain agent to external tools via a REST API
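A standardized tool API pairs each callable with a JSON Schema describing its inputs; that schema is what an agent reads to decide how to call the tool. A minimal registry sketch (the decorator and registry names are illustrative, not part of the challenge spec):

```python
# Tool registry: name -> {fn, description, parameters (JSON Schema)}
TOOLS = {}

def register_tool(name, description, parameters):
    def wrap(fn):
        TOOLS[name] = {"fn": fn, "description": description,
                       "parameters": parameters}
        return fn
    return wrap

@register_tool("calculator", "Evaluate an arithmetic expression",
               {"type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"]})
def calculator(expression: str) -> float:
    # eval() is unsafe on untrusted input; a real tool server must sandbox this
    return eval(expression, {"__builtins__": {}})

def call_tool(name, arguments):
    """Dispatch a tool call the way the REST layer would after validating input."""
    return TOOLS[name]["fn"](**arguments)
```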

Rate-limited LLM API Gateway

Backend · Intermediate · 365-day access · 149 onwards

Build a gateway that sits in front of any LLM API and enforces per-user token-bucket rate limits. Essential infrastructure for every production AI product.

  • Understand the token bucket algorithm and when to use it over other rate limiting approaches
  • Implement per-user token bucket rate limiting using Redis atomic operations
  • Rate limit by LLM token consumption, not just request count
  • Write load tests using Locust to verify rate limiting under concurrent traffic
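The token bucket itself is easy to sketch in memory; the challenge's harder part is making it atomic with a Redis Lua script under concurrency, which this illustration skips. The `cost` parameter is what lets a gateway charge by LLM tokens consumed rather than per request.

```python
import time

class TokenBucket:
    """In-memory token bucket; production needs Redis atomic ops for multi-process use."""
    def __init__(self, capacity, refill_rate, now=time.monotonic):
        self.capacity = capacity        # maximum tokens the bucket holds
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = float(capacity)
        self.now = now                  # injectable clock, for testing
        self.last = now()

    def allow(self, cost=1):
        """Refill based on elapsed time, then spend `cost` tokens if available."""
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.refill_rate)
        self.last = t
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```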

AI Webhook Processor

Backend · Beginner · 365-day access · 99 onwards

Receive webhook payloads, run them through an AI summarizer pipeline, and store structured results. Your first event-driven AI data pipeline.

  • Build and expose a webhook receiver that handles inbound HTTP POST payloads
  • Expose a local server to the internet using ngrok for testing
  • Build an event-driven AI pipeline: receive data → process with LLM → store result
  • Extract structured JSON output from LLM responses reliably
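The last bullet, reliable JSON extraction, usually means tolerating the code fences and surrounding prose LLMs wrap their output in. A lenient parsing sketch (stricter approaches use the provider's JSON mode or function calling; this regex approach is one common fallback):

```python
import json
import re

def extract_json(llm_text):
    """Pull the first JSON object out of an LLM reply, tolerating markdown
    fences and surrounding prose. Raises ValueError if nothing parses."""
    cleaned = re.sub(r"```(?:json)?|```", "", llm_text)  # strip ```json fences
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in LLM output")
    return json.loads(match.group(0))
```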

Streaming Chat API Endpoint

Backend · Beginner · 365-day access · 99 onwards

Build the server side of a streaming chat — an SSE endpoint that proxies LLM chunks to the client in real time. Learn async generators, backpressure, and stream piping.

  • Understand the Server-Sent Events (SSE) protocol and its use cases
  • Build a streaming endpoint that proxies LLM response chunks in real time
  • Use Python async generators or Node.js streams for efficient chunk forwarding
  • Detect client disconnections and cancel upstream requests to avoid waste
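The core of the endpoint is an async generator that wraps each upstream chunk in the SSE wire format (`data: ...` followed by a blank line). Here `fake_llm_stream` is a stand-in for the real upstream API, and the cancellation handler marks where the third and fourth bullets hook in:

```python
import asyncio

async def fake_llm_stream(prompt):
    # Stand-in for an upstream LLM streaming API.
    for chunk in ["Hel", "lo, ", "world"]:
        await asyncio.sleep(0)
        yield chunk

async def sse_events(prompt):
    """Yield upstream chunks in SSE wire format: 'data: ...\n\n' per event."""
    try:
        async for chunk in fake_llm_stream(prompt):
            yield f"data: {chunk}\n\n"
        yield "data: [DONE]\n\n"
    except asyncio.CancelledError:
        # Client disconnected: cancel the upstream request here to avoid waste.
        raise
```

A web framework (FastAPI, Starlette, etc.) would feed this generator into a streaming response.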

LLM Proxy API

Backend · Beginner · 365-day access · 99 onwards

Build an API wrapper around OpenAI — add request logging, API key auth, response caching, and basic error handling. Your first AI-aware backend service.

  • Build a reverse proxy API that wraps a third-party LLM service
  • Implement API key authentication with middleware
  • Log structured request data (latency, tokens, model) to a database
  • Cache identical LLM requests using SHA-256 hashed keys in Redis
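The caching bullet hinges on deriving a deterministic key from the request. One sketch, with an illustrative key prefix: serialize the request with sorted keys so identical requests hash identically regardless of dict ordering, then SHA-256 the result.

```python
import hashlib
import json

def cache_key(model, messages, temperature=0.0):
    """Derive a deterministic Redis key from LLM request parameters."""
    payload = json.dumps({"model": model, "messages": messages,
                          "temperature": temperature}, sort_keys=True)
    return "llmcache:" + hashlib.sha256(payload.encode()).hexdigest()
```

Any parameter that changes the response (model, temperature, messages) must be part of the hashed payload, or the cache will serve wrong answers.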

URL Shortener Service

Backend · Foundation · 365-day access · 99 onwards

Build a URL shortener — generate short codes, redirect to original URLs, track click analytics, and enforce rate limits per IP. Compact project, dense backend concepts.

  • Build a URL shortening service with unique code generation
  • Implement Redis caching to reduce database load on hot lookup paths
  • Track analytics data (click counts) without slowing down the critical path
  • Apply per-IP rate limiting using Redis counters and TTLs
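One common approach to unique code generation (among several the challenge could use) is base-62 encoding of the database row id, which keeps codes short and collision-free by construction:

```python
import string

ALPHABET = string.digits + string.ascii_letters  # 0-9, a-z, A-Z: base-62

def encode_id(n):
    """Map an auto-increment row id to a short base-62 code."""
    if n == 0:
        return ALPHABET[0]
    code = []
    while n:
        n, rem = divmod(n, 62)
        code.append(ALPHABET[rem])
    return "".join(reversed(code))
```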

Blog Platform API with Auth

Backend · Foundation · 365-day access · 99 onwards

Build a blog backend with JWT-based user authentication, post creation, tagging, and pagination. Learn auth patterns and API design that carry into every backend project.

  • Implement JWT-based authentication: registration, login, and token verification
  • Hash passwords securely using bcrypt
  • Protect routes with authentication middleware
  • Enforce ownership rules: users can only modify their own resources
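To show the mechanics behind JWT signing and verification, here is a hand-rolled HS256 sketch using only the standard library; in practice the challenge would use a maintained library such as PyJWT, and the secret would come from configuration, not a constant.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"change-me"  # placeholder; load from config in a real service

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict) -> str:
    """Build header.payload.signature, HMAC-SHA256 signed (JWT HS256 shape)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str) -> dict:
    """Recompute the signature and compare in constant time before trusting claims."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Auth middleware would call `verify_token` on the `Authorization` header and attach the decoded claims to the request.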

Student Management System API

Backend · Foundation · 365-day access · 99 onwards

Build a complete REST API for managing students, courses, and enrollments — full CRUD, relational data, and clean endpoint design. The classic backend project, done properly.

  • Design relational database schemas with foreign keys and join tables
  • Build a complete REST API with all standard HTTP methods and status codes
  • Implement input validation and meaningful error responses
  • Test API endpoints systematically using Postman or Thunder Client
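The schema design in the first bullet comes down to a many-to-many relationship: students and courses linked through an enrollments join table with foreign keys. An in-memory SQLite sketch (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK checks by default
conn.executescript("""
CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE courses  (id INTEGER PRIMARY KEY, title TEXT NOT NULL);
CREATE TABLE enrollments (
    student_id INTEGER NOT NULL REFERENCES students(id),
    course_id  INTEGER NOT NULL REFERENCES courses(id),
    PRIMARY KEY (student_id, course_id)   -- one enrollment per student+course
);
""")
conn.execute("INSERT INTO students VALUES (1, 'Asha')")
conn.execute("INSERT INTO courses VALUES (1, 'Databases')")
conn.execute("INSERT INTO enrollments VALUES (1, 1)")

# The join the GET /enrollments endpoint would run:
rows = conn.execute("""
    SELECT s.name, c.title FROM enrollments e
    JOIN students s ON s.id = e.student_id
    JOIN courses  c ON c.id = e.course_id
""").fetchall()
```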
  • 12k+ Verified Developers
  • 150+ Active Projects
  • 450+ Companies Hiring
  • 14 Days Avg. Completion

Got questions?

Every challenge includes detailed documentation, technical constraints, and automated evaluation scripts to ensure you have everything you need to succeed.