The cost dimension of RAG failures in production is underappreciated, and we see it reflected in the pricing data. Most teams prototype RAG with a retrieval step that pulls generous context windows to make sure nothing gets missed. Then they hit production and realize that output tokens currently cost roughly 4x input tokens on average across the market. The "retrieve more to be safe" instinct that worked in testing becomes an expensive habit at scale. The systems that survive production are usually the ones that got ruthless about retrieval precision early, not because of latency but because of the inference bill. We track inference costs across 50+ vendors weekly at a7om.com if the numbers are useful.
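The arithmetic behind that instinct can be sketched with a toy per-request cost model. Everything here is an illustrative assumption (prices, token counts, request volume), not real vendor data; the only thing taken from the text is the rough 4x output/input price ratio:

```python
# Toy cost model for RAG inference. All figures are assumptions for
# illustration; substitute your provider's actual per-token rates.

PRICE_IN = 1.00   # assumed $ per 1M input tokens
PRICE_OUT = 4.00  # assumed $ per 1M output tokens (~4x input, per the text)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request under the assumed prices."""
    return (input_tokens * PRICE_IN + output_tokens * PRICE_OUT) / 1_000_000

# "Retrieve more to be safe": 8k tokens of retrieved context per request.
generous = request_cost(input_tokens=8_000, output_tokens=500)
# Ruthless retrieval precision: 2k tokens of context, same output length.
precise = request_cost(input_tokens=2_000, output_tokens=500)

requests_per_month = 1_000_000  # assumed volume
print(f"generous: ${generous * requests_per_month:,.0f}/month")
print(f"precise:  ${precise * requests_per_month:,.0f}/month")
```

Even with output tokens priced at 4x, the generous-retrieval bill here is dominated by input tokens, which is why trimming retrieved context moves the monthly number so much.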