Here’s the key: after pretraining, the model’s latent space is frozen. Post-training and prompting don’t reshape this landscape; they change how we navigate it. Iterating over prompts, whether for in-context learning (ICL) or any other task, is the process of finding the best starting point and pathway through that fixed space. We’re not moving the distribution’s mass; we’re learning to traverse it more effectively to reach better results.

Related articles:

[1] https://ai-cosmos.hashnode.dev/understanding-ai-in-2025-its-still-all-about-the-next-token
[2] https://ai-cosmos.hashnode.dev/the-ai-trinity-what-everyone-gets-wrong-about-modern-ai-systems
[3] https://ai-cosmos.hashnode.dev/the-attention-bottleneck-ai-failure-modes-explained
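The idea can be made concrete with a toy sketch. Below, a hypothetical bigram table stands in for a pretrained model’s frozen distribution: the table is set once and never updated, yet different prompts select different starting points and therefore different paths through it. The table contents and function names are illustrative assumptions, not any real model’s internals.

```python
# Frozen "model": a fixed table of conditional next-token probabilities.
# This is a hypothetical stand-in for a pretrained model's latent landscape.
MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt: str, steps: int = 3) -> str:
    """Greedy decoding: the prompt fixes the starting point,
    but every step reads the same frozen table."""
    tokens = prompt.split()
    for _ in range(steps):
        dist = MODEL.get(tokens[-1])
        if dist is None:
            break  # no continuation defined for this token
        tokens.append(max(dist, key=dist.get))
    return " ".join(tokens)

# Two prompts traverse the same fixed distribution to different results.
print(generate("the cat"))  # → the cat sat down
print(generate("the dog"))  # → the dog ran away
```

The table never changes between the two calls; only the entry point does. That is the sense in which prompt iteration finds a better path through a fixed space rather than altering the space itself.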
