LLM-Sentinel: Shield your AI calls from leaking secrets for FREE!
Sep 1, 2025 · 1 min read

Every day, API keys, tokens, emails, and DB URLs slip into prompts, logs, or demos. Once they hit the LLM, they're out of your control. I built LLM-Sentinel, a privacy-first proxy that:

- Intercepts requests to OpenAI, Ollama, Claude, etc.
- Masks 50+ ...
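The core idea, masking secrets in a prompt before it ever reaches the provider, can be sketched in a few lines. This is a minimal illustration, not LLM-Sentinel's actual code: the pattern names, regexes, and `mask_secrets` helper are hypothetical stand-ins for the 50+ detectors the real proxy ships with.

```python
import re

# Hypothetical detector patterns for illustration only; the real proxy's
# pattern set and placeholder format are not shown in this excerpt.
PATTERNS = {
    "OPENAI_KEY": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "DB_URL": re.compile(r"\b\w+://[^\s]+:[^\s]+@[^\s]+"),
}

def mask_secrets(text: str) -> str:
    """Replace anything that looks like a secret before the text leaves the host."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[MASKED_{name}]", text)
    return text

prompt = "Use key sk-abc123def456ghi789jkl012 and email admin@example.com"
print(mask_secrets(prompt))
# → Use key [MASKED_OPENAI_KEY] and email [MASKED_EMAIL]
```

A proxy applies this transformation to the request body in flight, so callers point their OpenAI/Ollama/Claude base URL at the proxy and nothing else in their code changes.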