AI can generate a thousand articles a minute. But it can't do your thinking for you. Hashnode is a community of builders, engineers, and tech leaders who blog to sharpen their ideas, share what they've learned, and grow alongside people who care about the craft.
Your blog is your reputation — start building it.
18h ago · 12 min read · AI agents are moving beyond demos and into real production use. In production, you need sessions that last through infrastructure changes, code that stays secure, and controls that your platform team ...
1h ago · 4 min read · Every AI agent system you've seen has the same invisible problem. The skills are frozen. From the moment you deploy, the way your agent handles a complex workflow, the tool-call sequences it knows, the failure modes it avoids, all of it is locked in ...
56m ago · 4 min read · High-volume messaging exposes routing behavior, queueing, latency and execution paths most APIs hide. Everything works. Requests return 200. Messages get accepted. Delivery looks consistent. Until it doesn't ...
1 post this month · CEO @ United Codes
1 post this month · Building backend systems. Occasionally understanding why they work.
1 post this month · Obsessed with crafting software.
7 posts this month · Oracle APEX, PLSQL, SQL Developer
1 post this monthCompletely agree, most failures I’ve seen come from poor context management and unclear data flow, not the model itself. State handling also becomes a major issue when workflows scale, especially with multiple tools and agents interacting. In my experience, debugging improves a lot once you treat it as a system design problem rather than just an AI model issue.
Hmm, I think AI tools are actually pretty helpful, but you still have to double-check everything — they’re not perfect 🙂
Most companies haven't answered a basic question yet: who is accountable when an AI agent takes an action? Until that's resolved, they'll keep defaulting to safe, surface-level AI features instead of truly rethinking workflows. The bottleneck isn't the technology; it's the accountability layer nobody wants to own.
API docs get attention. The frontend/API contract usually doesn't. TypeScript helps, but types lie without runtime validation. The API returns an unexpected null, a renamed field, an edge case you never tested, and your types have no idea. Zod fixes this. Parse at the boundary. If the API changes shape, you catch it at the schema, not in a Sentry alert a week later. We do this with Next.js Server Actions too. The server/client boundary is the natural place to validate. Keep the schema next to the call. The documentation problem and the type-safety problem are usually the same problem.
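A minimal sketch of what "parse at the boundary" can look like; the `/api/users` endpoint and the field names are made up for illustration, but the pattern is just a Zod schema living next to the fetch call:

```typescript
import { z } from "zod";

// Keep the schema next to the call, so the contract lives with the consumer.
const UserSchema = z.object({
  id: z.string(),
  email: z.string().email(),
  // Model the nullable field explicitly instead of trusting the docs.
  displayName: z.string().nullable(),
});

type User = z.infer<typeof UserSchema>;

export async function fetchUser(id: string): Promise<User> {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  // Parse at the boundary: if the API changes shape, this throws here,
  // not in a component a week later.
  return UserSchema.parse(await res.json());
}
```

If the endpoint drops `displayName` or starts returning a number for `id`, the `parse` call fails right at the boundary instead of letting a bad value propagate into the UI.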
You’re definitely not alone: that “Step 5 bottleneck” is where most AI-assisted teams hit reality. Right now, most teams aren’t fully automating reviews yet. The common pattern I’m seeing is a hybrid approach, not purely human or purely automated. What others are doing: AI generates code → automated checks (linting, tests, security, architecture rules) → targeted human review (not full manual review). 👉 The key shift: humans review intent + architecture, not every line.
Nice first deployment walkthrough! One thing worth adding to this stack: set up an OAI (Origin Access Identity) or the newer OAC (Origin Access Control) so your S3 bucket stays fully private and only CloudFront can read from it. Without that, the bucket is publicly accessible even though CloudFront is in front. Also, consider adding a Cache-Control header strategy — setting immutable assets to max-age=31536000 with content hashing in filenames, and your index.html to no-cache so CloudFront always checks for the latest version. WAF is a solid move this early — most people skip it until they get hit with bot traffic.
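To make the Cache-Control part concrete, here's a rough sketch of how a deploy script might set those headers per object with the AWS SDK; the bucket name and `dist/` paths are placeholders, and the original walkthrough's upload flow may look different:

```typescript
import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

// Content-hashed assets never change, so they can be cached "forever";
// index.html must always revalidate so new deploys show up immediately.
function cacheControlFor(key: string): string {
  return key === "index.html"
    ? "no-cache"
    : "public, max-age=31536000, immutable";
}

async function upload(key: string, contentType: string) {
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-site-bucket", // placeholder bucket name
      Key: key,
      Body: await readFile(`dist/${key}`),
      ContentType: contentType,
      CacheControl: cacheControlFor(key),
    })
  );
}
```

The split works because the hashed filenames act as their own cache-busting mechanism, while the non-hashed `index.html` is the one file CloudFront should always check back for.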
I keep seeing people blame the model when something breaks. In most cases, that’s not where the problem is. From what I’ve seen, things usually fail somewhere else: agents pulling in too much or wrong ...
Agree. This is very close to what I’ve seen while building Origin. Once you connect AI to tools, files, and workspace state, it becomes much...
100% agree — this matches what I see building automation systems for clients daily. The model is usually the most reliable part of the stack...