Coding time dropped maybe 40-50% for boilerplate and scaffolding, but the interesting shift is where the time moved. I spend way less time writing code and way more time reviewing what the AI generated, designing architecture, and handling edge cases the AI didn't anticipate.
For context, I'm building AnveVoice — a voice AI that takes real DOM actions on websites. The AI coding tools are incredible for generating the initial widget code, API integrations, and even the MCP tool layer (we have 46 tools via JSON-RPC 2.0). But when you're dealing with real-time voice processing at sub-700ms latency across 50+ languages, AI tools still struggle with the nuanced parts — race conditions in WebSocket connections, graceful degradation when speech recognition fails mid-sentence, accessibility edge cases across screen readers.
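To make the race-condition point concrete, here's a minimal sketch of the kind of guard AI tools tend to omit: a generation counter that drops events from a superseded WebSocket connection after a reconnect, so a stale socket can't corrupt the current session's transcript. All names here are hypothetical, not AnveVoice's actual code, and the real transport details are stubbed out.

```typescript
type Handler = (msg: string) => void;

class VoiceSession {
  private generation = 0;        // bumped on every (re)connect attempt
  private transcript: string[] = [];

  // Each connect() returns a message handler bound to that attempt's
  // generation. A later reconnect invalidates all earlier handlers.
  connect(): Handler {
    const gen = ++this.generation;
    return (msg: string) => {
      if (gen !== this.generation) return; // stale connection: drop silently
      this.transcript.push(msg);
    };
  }

  getTranscript(): string[] {
    return [...this.transcript];
  }
}

// Usage: simulate a reconnect arriving mid-stream.
const session = new VoiceSession();
const first = session.connect();
first("hello");
const second = session.connect(); // reconnect supersedes the first socket
first("stale chunk");             // dropped: generation mismatch
second("world");
console.log(session.getTranscript()); // → ["hello", "world"]
```

The point isn't the pattern itself (it's a standard stale-callback guard); it's that AI-generated WebSocket code usually wires up `onmessage` without any notion of which connection attempt a message belongs to, and that only bites under real-world reconnects.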
The error question is interesting too. Fewer syntax errors, but more subtle architectural errors that only surface in production. AI tools write code that "works" in isolation but doesn't compose well. I've found that the debugging cycle actually got longer for complex bugs because the AI-generated code is harder to reason about when it breaks.
Net positive for sure, but the real skill shift is from "writing code" to "reviewing and integrating code" — which honestly might be a harder skill.