The shift toward agentic workflows is incredible, but the infrastructure side is completely broken right now.
With a 17-year background in cybersecurity, we naturally look at new tech through a security lens. Recently, we ran an anonymized scan across the landscape of open-source agent deployments—specifically focusing on OpenClaw.
What we found genuinely alarmed us: There are roughly 135,000 OpenClaw instances currently exposed to the public internet without proper authentication layers.
The Problem: Shipping "Raw" Endpoints
Building the agent's logic is fun. Building the reverse proxy, setting up token routing, and writing rate-limiting middleware is not. Because AI deployment tooling is still in its infancy, developers are shipping raw endpoints just to get their MVPs out the door.
When you expose an agent without middleware, you aren't just risking a DDoS attack. You are risking:
Token Draining: Bad actors hitting your endpoint to drain your OpenAI/Anthropic API limits.
Behavioral Hijacking: We've successfully "guilt-tripped" unprotected agents via prompt injection, making them drop their system prompts or self-sabotage simply because they lack an input-sanitization layer.
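An input-sanitization layer does not have to be elaborate to stop the lowest-effort attacks described above. The sketch below is a deliberately naive deny-list filter; the pattern list and function names are illustrative assumptions, not a complete defense (a production setup would pair this with a classifier and output-side checks).

```python
import re

# Naive deny-list of phrases that commonly appear in prompt-injection
# attempts. Illustrative only -- a real deployment needs more than regexes,
# but even this rejects the cheapest "drop your system prompt" probes.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?system prompt",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def guard(user_input: str) -> str:
    """Reject suspicious input before it ever reaches the LLM."""
    if looks_like_injection(user_input):
        raise ValueError("rejected: possible prompt injection")
    return user_input
```

The point is placement, not sophistication: the check runs in your middleware, so a rejected request never consumes a single token.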
The Manual Fix
If you are deploying an agent today, you must wrap it. At a minimum, make sure you:
Put the instance behind an Nginx or Traefik reverse proxy.
Implement API key auth at the header level so unverified requests bounce before they ever hit the LLM.
Set hard rate limits on token consumption per IP/User.
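All three of the steps above can live in a single reverse-proxy config. This is a minimal Nginx sketch under assumed values: the agent listening on localhost:8000, a hypothetical `agent.example.com` hostname, and a placeholder API key you would replace with your own (per-token budgets still need app-side accounting; the proxy can only cap raw request volume).

```nginx
# Minimal hardening sketch. Assumes the agent listens on 127.0.0.1:8000;
# hostnames, paths, and the key are placeholders.

# Rate limit: 5 requests/second per client IP.
limit_req_zone $binary_remote_addr zone=agent_rl:10m rate=5r/s;

server {
    listen 443 ssl;
    server_name agent.example.com;

    ssl_certificate     /etc/ssl/certs/agent.pem;
    ssl_certificate_key /etc/ssl/private/agent.key;

    location / {
        # Bounce requests without the expected API key header
        # before they ever reach the LLM backend.
        if ($http_x_api_key != "CHANGE-ME-LONG-RANDOM-KEY") {
            return 401;
        }

        limit_req zone=agent_rl burst=10 nodelay;

        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

A static shared key in the config is the crudest viable option; rotating keys or an auth subrequest is the natural next step, but even this closes the "raw endpoint" gap.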
We are currently researching better automated ways to solve this, but in the meantime, how are you all handling your agent deployments? Are you writing custom middleware from scratch for every project?
We are seeing developers spend weeks building an amazing agent, only to wake up to a drained OpenAI API bill because they left the endpoint unprotected. It's wild that we collectively accept this massive security gap just to ship an MVP a little faster.
Dhruv Joshi
Tech Content Strategist and Expert