Super important topic that doesn't get enough attention. When I build LLM-powered automation systems for clients, prompt injection defense and output sanitization are always the first things I architect around — not an afterthought. One pattern I've found critical is treating LLM outputs as untrusted input by default, running them through the same validation pipeline you'd use for user-submitted data before they hit any downstream service. Are you seeing teams adopt these patterns proactively, or mostly after an incident?
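For anyone curious what that gate can look like in practice, here's a minimal sketch assuming a JSON tool-call style output; the action allowlist, field names, and size limit are all hypothetical, but the shape is the same strict validation you'd apply to user-submitted data:

```python
import json

# Illustrative allowlist -- in a real system this comes from your tool registry.
ALLOWED_ACTIONS = {"create_ticket", "send_summary"}


class ValidationError(ValueError):
    """Raised when LLM output fails the untrusted-input checks."""


def validate_llm_output(raw: str) -> dict:
    """Parse and validate an LLM response before any downstream service sees it."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValidationError(f"not valid JSON: {exc}")
    if not isinstance(data, dict):
        raise ValidationError("expected a JSON object")
    # Reject unexpected keys outright, same as strict user-input validation.
    extra = set(data) - {"action", "payload"}
    if extra:
        raise ValidationError(f"unexpected keys: {sorted(extra)}")
    if data.get("action") not in ALLOWED_ACTIONS:
        raise ValidationError(f"action not in allowlist: {data.get('action')!r}")
    # Bound the payload so injected content can't smuggle arbitrary blobs downstream.
    if not isinstance(data.get("payload"), str) or len(data["payload"]) > 2000:
        raise ValidationError("payload must be a string under 2000 chars")
    return data
```

The key design choice is fail-closed: anything the validator doesn't explicitly recognize gets rejected, so a prompt-injected "extra" instruction never reaches a downstream API.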