The doomprompting pattern is familiar to anyone building production agents. The core insight that took me too long to learn: prompt adjustments have diminishing returns past a certain point.
What actually moves the needle:
Explicit contracts over implicit expectations - Define what success looks like in code, not prose. A schema validates output; a prompt hopes for it.
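A minimal sketch of what "contract in code" can mean: validate the agent's output against an explicit schema before anything downstream touches it. The field names and the ticket-extraction task here are illustrative, not from the post.

```python
import json

# Hypothetical contract for an agent that extracts support tickets.
# Field names and allowed values are illustrative assumptions.
REQUIRED_FIELDS = {"ticket_id": str, "severity": str, "summary": str}
ALLOWED_SEVERITIES = {"low", "medium", "high"}

def validate_output(raw: str) -> dict:
    """Parse and validate agent output against the contract; raise on violation."""
    data = json.loads(raw)  # malformed JSON fails loudly here, not downstream
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"missing or mistyped field: {field}")
    if data["severity"] not in ALLOWED_SEVERITIES:
        raise ValueError(f"severity out of range: {data['severity']}")
    return data

# A conforming response passes; anything else raises before it can do damage.
ok = validate_output('{"ticket_id": "T-1", "severity": "high", "summary": "login fails"}')
```

The point is the asymmetry: the prompt asks for this shape, but only the validator guarantees it.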
Failure taxonomies - The same prompt failing in different ways isn't randomness; it's a classification problem. Track which failure modes repeat and build guards for those specifically.
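One way to make the taxonomy concrete: name each failure mode, classify every failed response, and count. The categories below are hypothetical examples, not a canonical list.

```python
from collections import Counter

def classify_failure(raw_output: str) -> str:
    """Map a failed agent response to a named failure mode (illustrative categories)."""
    stripped = raw_output.strip()
    if not stripped:
        return "empty_response"
    if stripped.startswith(("I cannot", "I'm sorry")):
        return "refusal"
    if "```" in stripped:
        return "markdown_wrapped_json"  # valid JSON buried in a code fence
    return "malformed_json"

failure_log = Counter()
for attempt in ['', "I'm sorry, I can't do that.", '```json\n{}\n```', '{"a":']:
    failure_log[classify_failure(attempt)] += 1

# The counts tell you which guard to build first.
print(failure_log.most_common())
```

Once a mode repeats, it gets a dedicated guard (e.g. stripping code fences before parsing) instead of another prompt tweak.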
Scaffolded prompts - Long prompts work worse than short prompts + structured context. The agent should retrieve what it needs, not have everything thrown at it.
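A toy sketch of scaffolding: a short task prompt plus retrieved context, instead of one long prompt carrying every document. Retrieval here is naive keyword overlap purely for illustration; a real system would use embeddings or a proper index.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(task: str, documents: list[str]) -> str:
    """Short task statement plus only the context the task actually needs."""
    context = "\n".join(f"- {doc}" for doc in retrieve(task, documents))
    return f"Task: {task}\n\nRelevant context:\n{context}"

docs = [
    "Refund policy: refunds allowed within 30 days.",
    "Shipping times: 5-7 business days domestic.",
    "Warranty covers manufacturing defects for one year.",
]
print(build_prompt("Can the customer get a refund after 20 days?", docs))
```

The prompt stays short and stable; only the retrieved context varies per task.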
The shift from "make the agent understand" to "make the system catch the agent" is where reliable behavior actually emerges. Prompt engineering is part of it, but constraint design is the multiplier.
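The shift above can be sketched as a validate-and-retry loop: the system, not the prompt, enforces the contract. `call_agent` is a stand-in stub for whatever model client you use, and the "pure JSON" constraint is an illustrative choice.

```python
import json

def call_agent(prompt: str, attempt: int) -> str:
    # Stub standing in for a real model call: chatty on the first try,
    # conforming on the retry.
    return '{"answer": 42}' if attempt > 0 else 'Sure! Here is the JSON: {"answer": 42}'

def run_with_guard(prompt: str, max_attempts: int = 3) -> dict:
    """Retry until the output satisfies the contract, or fail loudly."""
    last_error = None
    for attempt in range(max_attempts):
        raw = call_agent(prompt, attempt)
        try:
            return json.loads(raw)  # the constraint: output must be pure JSON
        except json.JSONDecodeError as e:
            last_error = e  # optionally fed back into the next prompt
    raise RuntimeError(f"agent never satisfied the contract: {last_error}")

result = run_with_guard("Answer as JSON.")
```

The agent is allowed to be unreliable; the loop around it is not.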