This resonates deeply. We hit the same wall building automation workflows — a single monolithic prompt trying to handle every edge case becomes brittle fast. The shift to composable skills is essentially the same principle as microservices vs monoliths, but for LLM orchestration.

One pattern that's worked well: treating each skill as a self-contained unit with its own context window budget, clear input/output contracts, and fallback behavior. Instead of one 8k-token mega-prompt, you get five 1k-token focused skills that the orchestrator chains based on intent classification.

The debugging story also improves dramatically — when something breaks, you know exactly which skill misfired instead of hunting through a wall of instructions.
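To make the pattern concrete, here's a minimal sketch of what I mean. Everything here is hypothetical — the `Skill` and `Orchestrator` names, the keyword-based intent classifier standing in for an LLM call, and the word-count budget check are all illustrative stand-ins, not any particular framework's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    """A self-contained unit: its own budget, contract, and fallback."""
    name: str
    token_budget: int                        # per-skill context window budget
    run: Callable[[str], str]                # input/output contract: str -> str
    fallback: Callable[[str], str] = lambda text: text  # behavior on failure

    def invoke(self, text: str) -> str:
        try:
            # Crude stand-in for real tokenization: count whitespace-split words.
            if len(text.split()) > self.token_budget:
                raise ValueError(f"{self.name}: input exceeds budget")
            return self.run(text)
        except Exception:
            # A misfire is contained to this skill, not the whole pipeline.
            return self.fallback(text)

class Orchestrator:
    def __init__(self, skills: dict[str, Skill], routes: dict[str, list[str]]):
        self.skills = skills
        self.routes = routes                 # intent -> ordered skill names

    def classify_intent(self, text: str) -> str:
        # Stand-in for an LLM-based intent classifier.
        return "summarize" if "summary" in text.lower() else "default"

    def handle(self, text: str) -> str:
        # Chain the focused skills for this intent instead of one mega-prompt.
        for name in self.routes[self.classify_intent(text)]:
            text = self.skills[name].invoke(text)
        return text
```

Usage looks like registering a few toy skills and routing by intent:

```python
upper = Skill("upper", 1000, lambda t: t.upper())
first = Skill("first_sentence", 1000, lambda t: t.split(".")[0])
orc = Orchestrator(
    {"upper": upper, "first": first},
    {"summarize": ["first", "upper"], "default": ["upper"]},
)
orc.handle("give me a summary. second part.")  # -> "GIVE ME A SUMMARY"
```

The real versions of `run` would wrap LLM calls, but the structure — explicit contracts, per-skill budgets, contained failures — is the whole point.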