This solves a real pain point. I run AI agents on client codebases regularly, and the orientation cost is brutal — agents burn through tokens just understanding project structure before doing any actual work. A 4k-token index versus 500k tokens of raw context is a massive efficiency gain. The MCP server integration is particularly interesting, since MCP is becoming the standard for tool-agent communication. Have you benchmarked how much this improves first-task accuracy compared to agents that do a full file-by-file scan?