Hi everyone,
I've been reading about AI security recently. Most articles warn you about the AI your users interact with; they don't mention the AI tools you're building with. I've used AI coding assistants to write code, generate documentation, and even learn cryptography fundamentals, all to deploy services in production. The OWASP Top 10 for LLM Applications, updated for 2025, describes 10 risks that apply just as much to your internal AI toolchain as to the chatbot you're shipping. The threat surface isn't only in front of your users. It starts in your IDE.
I wrote more here: strategizeyourcareer.com/p/owasp-top-10-llm-ai-se…
I think sandboxing will always be the best mitigation. Even with pre-commit hook scanning, if the agent writes credentials into a commit, that means the agent had access to them in the first place. For many use cases, it shouldn't have that access from the start :)
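To make the "no access from the start" point concrete, here's a minimal sketch (my own illustration, not from the article) of launching an agent subprocess with a scrubbed environment, so credential-like variables are never inherited. The marker list and the `run_agent` helper are hypothetical:

```python
import os
import subprocess

# Env var name fragments that commonly carry credentials (illustrative list).
SENSITIVE_MARKERS = ("KEY", "TOKEN", "SECRET", "PASSWORD", "CREDENTIAL")

def scrubbed_env(env=None):
    """Return a copy of the environment with credential-like variables removed."""
    env = dict(os.environ if env is None else env)
    return {k: v for k, v in env.items()
            if not any(m in k.upper() for m in SENSITIVE_MARKERS)}

def run_agent(cmd):
    """Run an agent command with no inherited credentials (hypothetical helper)."""
    return subprocess.run(cmd, env=scrubbed_env(), capture_output=True, text=True)
```

This only removes ambient credentials; a real sandbox would also restrict the filesystem and network (e.g. a container with read-only mounts), but denying the easy path is a cheap first step.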
Thanks for your perspective!!
Ethan Frost
AI builder & open-source advocate. Curating the best AI tools, prompts, and skills at tokrepo.com
The OWASP LLM risks become even more critical when you consider that AI coding agents now have shell access and can modify files directly. Prompt injection isn't just a chatbot problem anymore — it's a supply chain risk when an agent reads untrusted input (like a GitHub issue body) and executes code based on it.
Two practical mitigations I've found effective: 1) Sandboxing agent execution so it can't access credentials or production systems, and 2) Using pre-commit hooks that scan for common patterns like hardcoded secrets or suspicious shell commands in AI-generated code. Claude Code's hook system supports this natively, which helps enforce security gates in the CI pipeline automatically.
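For the second mitigation, a pre-commit secret scan can be sketched in a few lines (my own illustration; real hooks such as gitleaks or detect-secrets ship far more patterns, and the regexes below are simplified):

```python
import re
import sys

# Illustrative secret patterns; production scanners carry hundreds of these.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),  # private key header
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}"),
]

def scan_text(text):
    """Return (line_number, matched_snippet) pairs for secret-like patterns."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for pat in SECRET_PATTERNS:
            m = pat.search(line)
            if m:
                findings.append((lineno, m.group(0)))
    return findings

def main(paths):
    """Pre-commit entry point: exit non-zero if any staged file has findings."""
    bad = False
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            for lineno, snippet in scan_text(f.read()):
                print(f"{path}:{lineno}: possible secret: {snippet[:40]}")
                bad = True
    return 1 if bad else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Wired up as a hook that receives the staged file paths, a non-zero exit blocks the commit. As noted above, though, this is a detection layer: if the scan fires, the agent already saw the secret, which is why sandboxing comes first.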