Context Is All You Have: How LLM Attention Actually Works
You've seen the marketing: "128k context window!" "1 million tokens!" But what does that actually mean for your use case? And why does your chatbot still forget what you said 20 messages ago?
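A quick taste of what that forgetting looks like in practice. Here's a minimal sketch (hypothetical code, not from this series; `fit_to_context` and `approx_tokens` are made-up names, and a real app would use the model's actual tokenizer) of the most common culprit: the chat app keeps only the newest messages that fit the token budget and silently drops everything older.

```python
def fit_to_context(messages, max_tokens, count_tokens):
    """Keep the newest messages whose combined token count fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break                        # older messages are silently dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

# Crude stand-in for a real tokenizer (roughly 4 characters per token).
approx_tokens = lambda text: max(1, len(text) // 4)

history = [f"message {i}: " + "words " * 50 for i in range(40)]
window = fit_to_context(history, max_tokens=1000, count_tokens=approx_tokens)
print(f"{len(window)} of {len(history)} messages survive truncation")
```

Your message from 20 turns ago isn't "forgotten" by the model so much as it was never sent: it fell off the end of a window exactly like this one.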
This is the first post in a series on LLM internals — no h...