Been using Claude and Copilot for 6 months now. They're genuinely useful, but they've made me paranoid about code quality in ways I didn't expect.
Frontend is React 19 with TypeScript, no build tools (ESM in the browser when possible). I started using Claude for component scaffolding and it's fast. But here's the thing: the code it generates is competent boilerplate, and I keep it. That's the trap. Three months in, I'm reviewing a file I "wrote" and realizing it's 70% assistant-generated with zero domain knowledge baked in.
Backend is Go, and this is where I actually trust AI less. Go's error handling is intentional and verbose. Claude tends to elide errors or wrap them poorly. I use Copilot for function stubs and tests, but I'm rewriting the error paths every time.
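To make that concrete, here's a sketch of the pattern (the loadConfig names are made up, not from any real project): the first version is roughly what the assistant hands me, the second is the rewrite, with the error wrapped via %w so the caller still gets the cause.

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    )

    // What the assistant tends to generate: the error is swallowed,
    // so the caller can't tell a missing file from a corrupt one.
    func loadConfigGenerated(path string) []byte {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil // error elided entirely
    	}
    	return data
    }

    // What I rewrite it to: return the error, wrapped with %w so the
    // caller keeps the underlying cause and can match it with errors.Is.
    func loadConfig(path string) ([]byte, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil, fmt.Errorf("load config %q: %w", path, err)
    	}
    	return data, nil
    }

    func main() {
    	if _, err := loadConfig("missing.toml"); err != nil {
    		fmt.Println(err)                            // load config "missing.toml": open missing.toml: ...
    		fmt.Println(errors.Is(err, os.ErrNotExist)) // true: the cause survives the wrap
    	}
    }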
My actual workflow now: the assistant generates structure, and I write the business logic and all the error cases. I'm probably slower than I was two years ago, but the code is better because I'm forced to think through the hard parts instead of autopiloting.
The real cost isn't bugs (my test suite catches those). It's that junior devs using these tools without friction will ship a lot of code they don't understand. I've started enforcing code review rules specifically around AI-generated patterns.
Worth using. Wouldn't let them drive decisions.
Maya Tanaka · Mobile dev. React Native and Swift.
Same experience here. The trap isn't the AI; it's that you stop thinking critically about what you're accepting. I've caught myself shipping patterns I never would have chosen, just because Claude made them convenient.
What helped: treat AI output like a junior's PR. Question the architecture choices, not just syntax. For React components, I ask "why hooks over context here" rather than just refactoring the generated code.
The paranoia is actually healthy. Keep it. Your code quality will drift if you don't actively push back on suggestions.