AI can generate a thousand articles a minute. But it can't do your thinking for you. Hashnode is a community of builders, engineers, and tech leaders who blog to sharpen their ideas, share what they've learned, and grow alongside people who care about the craft.
Your blog is your reputation — start building it.
2h ago · 3 min read · The Free AI Stack: GLM-5.1 + Gemini + Claude Without Paying a Dollar Let me cook: 🔥 I run a production multi-tenant SaaS backend with three AI providers running in parallel. GLM-5.1 for coding, Gemini for research, Claude for complex...
7h ago · 6 min read · Introduction We needed a celebration effect. Confetti, stars, bubbles exploding across the entire screen. Hundreds of them, all at once, chaotic and fun. It went from smooth to stuttering to buttery a
Building, What Matters.... · 2 posts this month
Sr. Staff Software Engineer @ CentralReach - Working with MAUI / .NET / SQL Server / React · 1 post this month
JADEx Developer · 1 post this month
Obsessed with crafting software. · 2 posts this month
Most are still shipping “AI add-ons.” The real shift happens when the whole workflow disappears into one action — that’s when users actually feel the value.
The OWASP LLM risks become even more critical when you consider that AI coding agents now have shell access and can modify files directly. Prompt injection isn't just a chatbot problem anymore — it's a supply chain risk when an agent reads untrusted input (like a GitHub issue body) and executes code based on it. Two practical mitigations I've found effective: 1) Sandboxing agent execution so it can't access credentials or production systems, and 2) Using pre-commit hooks that scan for common patterns like hardcoded secrets or suspicious shell commands in AI-generated code. Claude Code's hook system supports this natively, which helps enforce security gates in the CI pipeline automatically.
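The second mitigation above can be sketched as a tiny pre-commit scanner. This is a minimal illustration, not a real hook implementation: the pattern names and regexes below are assumptions, and a production setup would use a dedicated tool (or Claude Code's hook system, as mentioned) rather than this hand-rolled check.

```typescript
// Hypothetical pre-commit scanner: flags likely hardcoded secrets or
// risky shell patterns in AI-generated code before it reaches CI.
// The patterns below are illustrative, not exhaustive.

const SUSPICIOUS_PATTERNS: { name: string; re: RegExp }[] = [
  { name: "aws-access-key", re: /AKIA[0-9A-Z]{16}/ },
  { name: "generic-secret", re: /(api[_-]?key|secret|token)\s*[:=]\s*["'][^"']{12,}["']/i },
  { name: "curl-pipe-shell", re: /curl[^|\n]*\|\s*(ba)?sh/ },
];

// Returns the names of every pattern that matches the given source text.
function scanForSecrets(source: string): string[] {
  const findings: string[] = [];
  for (const { name, re } of SUSPICIOUS_PATTERNS) {
    if (re.test(source)) findings.push(name);
  }
  return findings;
}

// Example: a snippet an agent might have generated from untrusted input.
const generated = `const client = new Api({ apiKey: "sk-live-0123456789abcdef" });`;
console.log(scanForSecrets(generated)); // → [ 'generic-secret' ]
```

A hook like this is cheap to run on every commit and catches the obvious failure modes; it complements, rather than replaces, sandboxing the agent away from credentials.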
API docs get attention. The frontend/API contract usually doesn't. TypeScript helps, but types lie without runtime validation. The API returns an unexpected null, a renamed field, an edge case you never tested, and your types had no idea. Zod fixes this. Parse at the boundary. If the API changes shape, you catch it at the schema, not in a Sentry alert a week later. We do this with Next.js Server Actions too. The server/client boundary is the natural place to validate. Keep the schema next to the call. The documentation problem and the type-safety problem are usually the same problem.
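The "parse at the boundary" idea can be shown without any dependencies. In real code you would use zod (`z.object({...}).parse(json)` at the fetch call site); the hand-rolled `parseUser` below is a zod-like sketch of the same flow, with hypothetical field names:

```typescript
// Dependency-free sketch of "parse at the boundary".
// With zod this would be: const User = z.object({ id: z.number(), name: z.string() });
// and User.parse(json) right where the response comes in.

type User = { id: number; name: string };

// Validates the raw payload once; throws if the shape has drifted,
// so a renamed field or unexpected null fails here, not deep in the UI.
function parseUser(data: unknown): User {
  if (typeof data !== "object" || data === null) throw new Error("expected object");
  const d = data as Record<string, unknown>;
  if (typeof d.id !== "number") throw new Error("id: expected number");
  if (typeof d.name !== "string") throw new Error("name: expected string");
  return { id: d.id, name: d.name };
}

// At the boundary: parse the response once, trust the type everywhere after.
const ok = parseUser(JSON.parse(`{"id": 1, "name": "Ada"}`));
console.log(ok.name); // → "Ada"

// A renamed field ("fullName" instead of "name") fails loudly at the schema:
try {
  parseUser(JSON.parse(`{"id": 2, "fullName": "Grace"}`));
} catch (e) {
  console.log((e as Error).message); // → "name: expected string"
}
```

The payoff is that the failure carries the field name and happens at the call site, which is exactly the debugging context a Sentry alert a week later lacks.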
You’re definitely not alone: that “Step 5 bottleneck” is where most AI-assisted teams hit reality. Right now, most teams aren’t fully automating reviews yet. The common pattern I’m seeing is a hybrid approach, not purely human or purely automated: AI generates code → automated checks (linting, tests, security, architecture rules) → targeted human review (not full manual review). 👉 The key shift: humans review intent + architecture, not every line.
Great breakdown of a decision most teams get wrong by defaulting to whatever's trending. The key insight people miss: BFF isn't an alternative to API Gateway — they solve different problems at different layers. API Gateway handles cross-cutting concerns (auth, rate limiting, routing) while BFF handles client-specific data shaping. You can absolutely run both. Where GraphQL fits depends on your team's query complexity — if your frontend needs to fetch deeply nested, variable-shape data across multiple domains, GraphQL shines. But if you're mostly doing CRUD with predictable payloads, a BFF with REST is simpler to cache, easier to debug, and doesn't require the schema stitching overhead. The real question should be: how many distinct clients are consuming your API? One client = REST is fine. Three+ clients with wildly different data needs = that's where BFF or GraphQL earns its complexity budget.
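The client-specific data shaping that distinguishes a BFF can be reduced to a pure function per client. This is a minimal sketch with invented type and field names, assuming the gateway has already handled auth and rate limiting upstream:

```typescript
// Hypothetical BFF-layer shaping: trim one generic domain payload
// into what each specific client actually needs. All names illustrative.

type Product = {
  id: string;
  name: string;
  description: string;
  priceCents: number;
  warehouseIds: string[]; // internal detail no client should see
};

// Mobile wants a tiny payload; web wants the full description too.
function shapeForMobile(p: Product) {
  return { id: p.id, name: p.name, price: (p.priceCents / 100).toFixed(2) };
}

function shapeForWeb(p: Product) {
  return {
    id: p.id,
    name: p.name,
    description: p.description,
    price: (p.priceCents / 100).toFixed(2),
  };
}

const product: Product = {
  id: "p1",
  name: "Widget",
  description: "A fine widget.",
  priceCents: 1999,
  warehouseIds: ["w-eu-1"],
};

console.log(shapeForMobile(product)); // → { id: 'p1', name: 'Widget', price: '19.99' }
```

With one client, these functions collapse into the REST handler itself; it's only when mobile, web, and partner shapes diverge that splitting them into a dedicated BFF (or a GraphQL schema) pays for its complexity.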
Great question—especially around making AI outputs feel intuitive. I think using progressive disclosure (simple insights first, deeper details on demand) can really help reduce overwhelm while still building trust. For visualizing predictions, small cues like confidence levels, colors, or tooltips can make a big difference without cluttering the UI. I’ve also seen tools like brat-generator-pink focusing on clean and simplified output, which is a useful direction for keeping things user-friendly.
I keep running into the same issue in almost every API project I work on: 👉 The API works 👉 The tests pass 👉 But the documentation is already outdated And the bigger the system gets (microservices,
One tool missing from most API doc discussions: MCP (Model Context Protocol) server definitions. If you're building APIs that AI agents cons...