LLM Agents
Difficulty: hard
Estimated time: ~40 hours
Multi-Tool RAG Agent with Context Management
Build an LLM agent that orchestrates web search, code execution, and database queries. Implement context compaction to prevent quality degradation over long conversations.
Skills Demonstrated
ReAct loop implementation
Tool orchestration
Context window management
Streaming responses
Implementation Steps
- Define tool interfaces (search, code exec, SQL) with Pydantic schemas
- Implement ReAct loop with thought/action/observation cycle
- Add context compaction: summarize old tool results, keep last N steps
- Build streaming response handler with real-time tool status
- Add evaluation harness with multi-step task benchmarks
- Implement conversation memory with Redis-backed sliding window
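The first step above can be sketched with Pydantic models, one per tool, whose generated JSON schemas become the tool definitions sent to the model. The class names, the `TOOL_REGISTRY` mapping, and the OpenAI-style spec shape are illustrative assumptions, not a fixed design.

```python
# Hypothetical sketch of step 1: each tool's arguments are a Pydantic
# model, so its JSON schema can be handed to the LLM as a tool spec.
from pydantic import BaseModel, Field

class WebSearchArgs(BaseModel):
    query: str = Field(description="Search query to send to the web search API")
    max_results: int = Field(default=5, ge=1, le=20)

class CodeExecArgs(BaseModel):
    code: str = Field(description="Python source to run in the sandbox")
    timeout_s: float = Field(default=10.0, gt=0)

class SqlQueryArgs(BaseModel):
    sql: str = Field(description="Read-only SQL statement")

# Registry mapping tool names to their argument schemas (illustrative).
TOOL_REGISTRY = {
    "web_search": WebSearchArgs,
    "code_exec": CodeExecArgs,
    "sql_query": SqlQueryArgs,
}

def tool_specs() -> list[dict]:
    """Render registry entries into OpenAI-style tool definitions."""
    return [
        {"type": "function",
         "function": {"name": name, "parameters": model.model_json_schema()}}
        for name, model in TOOL_REGISTRY.items()
    ]
```

Validating incoming tool calls is then one line, e.g. `WebSearchArgs.model_validate(args)`, which rejects malformed arguments before the tool runs.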
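Steps 2 and 3 can be combined in one loop: the model proposes a thought and an action, the chosen tool runs, and before each turn a compaction pass shrinks all but the last N observations. This is a minimal sketch; the `llm` callable stands in for a real model client, and the truncation in `compact` is a crude placeholder for an LLM-generated summary.

```python
# Sketch of steps 2-3: ReAct-style thought/action/observation loop
# with context compaction over old tool results.
from dataclasses import dataclass, field

@dataclass
class Step:
    thought: str
    action: str          # tool name, or "finish"
    observation: str = ""

@dataclass
class AgentState:
    steps: list[Step] = field(default_factory=list)

def compact(state: AgentState, keep_last: int = 3) -> None:
    """Shrink observations older than the last `keep_last` steps."""
    for step in state.steps[:-keep_last]:
        if len(step.observation) > 80:
            # Stand-in for an LLM summary of the old tool result.
            step.observation = step.observation[:77] + "..."

def run_agent(llm, tools: dict, task: str, max_steps: int = 8) -> str:
    state = AgentState()
    for _ in range(max_steps):
        compact(state)
        thought, action, arg = llm(task, state.steps)  # model proposes next move
        if action == "finish":
            return arg                                 # final answer
        obs = tools[action](arg)                       # execute the chosen tool
        state.steps.append(Step(thought, action, obs))
    return "max steps exceeded"
```

The `max_steps` cap matters in practice: without it, a model that never emits `finish` loops until the context window overflows.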
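For step 4, the streaming handler can be modeled as a generator that interleaves tool-status events with answer tokens, so the UI can render "running web_search..." while the tool executes. The event dictionaries below are an assumed shape, not a fixed protocol.

```python
# Sketch of step 4: stream tool status and answer tokens as one
# event sequence that a frontend can consume incrementally.
from typing import Callable, Iterator

def stream_turn(tool_name: str,
                tool_fn: Callable[[str], str],
                arg: str,
                answer_tokens: Callable[[str], list[str]]) -> Iterator[dict]:
    # Announce the tool call before it runs, so the UI updates immediately.
    yield {"type": "tool_status", "tool": tool_name, "state": "running"}
    result = tool_fn(arg)
    yield {"type": "tool_status", "tool": tool_name, "state": "done", "result": result}
    # Then stream the model's answer token by token.
    for tok in answer_tokens(result):
        yield {"type": "token", "text": tok}
```

In a real service these events would be serialized as server-sent events or WebSocket frames; the generator keeps that transport concern separate from the agent logic.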
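Step 6 maps naturally onto a Redis list per conversation: `LPUSH` prepends each new turn and `LTRIM` discards everything beyond the window. In the sketch below the client is injected so the logic can be exercised without a server; in production it would be a `redis.Redis()` instance, and the `conv:{id}` key layout is an assumption.

```python
# Sketch of step 6: conversation memory as a Redis-backed sliding window.
import json

class SlidingWindowMemory:
    def __init__(self, client, max_turns: int = 20):
        self.client = client          # redis.Redis() in production
        self.max_turns = max_turns

    def _key(self, conversation_id: str) -> str:
        return f"conv:{conversation_id}"

    def append(self, conversation_id: str, role: str, content: str) -> None:
        key = self._key(conversation_id)
        self.client.lpush(key, json.dumps({"role": role, "content": content}))
        # Keep only the newest max_turns entries.
        self.client.ltrim(key, 0, self.max_turns - 1)

    def history(self, conversation_id: str) -> list[dict]:
        raw = self.client.lrange(self._key(conversation_id), 0, -1)
        return [json.loads(m) for m in reversed(raw)]  # oldest first
```

Trimming on every write keeps memory bounded without a background job, at the cost of one extra Redis command per turn.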
Interview Relevance
Why this project matters for interviews
Demonstrates understanding of LLM agent architectures, attention-budget management, and production-grade tool orchestration, all common interview topics at labs such as Google DeepMind, Anthropic, and OpenAI.