honestly this is the thread i've been waiting for someone to start.
so i've been using Cursor + Claude for about 6 months now on our main product (Next.js app, nothing crazy). and yeah we're shipping AI generated code to production. not like yolo shipping it though, we have a process.
the way i think about it: i let the model handle the stuff where the intent IS the implementation. like if i need a form with validation, a CRUD endpoint, a util function to parse dates in some weird format... just describe it, review the output, done. that stuff used to be the most boring part of my day and the model gets it right 90% of the time.
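to make that concrete, here's the kind of date util i'd happily hand off. the format ("DD.MM.YYYY HH:mm") and the function name are made up for the example, not from our actual codebase — just a sketch of the "intent IS the implementation" tier:

```typescript
// hypothetical example: parse "DD.MM.YYYY HH:mm" into a Date, or null if invalid.
// the kind of boring, well-specified util i'd describe to the model and just review.
function parseWeirdDate(s: string): Date | null {
  const m = /^(\d{2})\.(\d{2})\.(\d{4}) (\d{2}):(\d{2})$/.exec(s);
  if (!m) return null;
  const [, dd, mm, yyyy, hh, min] = m;
  const d = new Date(
    Number(yyyy),
    Number(mm) - 1, // JS Date months are 0-indexed
    Number(dd),
    Number(hh),
    Number(min)
  );
  // reject rollover dates like 31.02.2024, which Date silently normalizes to march
  if (d.getDate() !== Number(dd) || d.getMonth() !== Number(mm) - 1) return null;
  return d;
}
```

the rollover check at the end is exactly the kind of detail i look for in review — the happy path is trivial, the question is whether the model remembered that `new Date(2024, 1, 31)` doesn't throw.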
where i draw the line is anything with actual business logic or state management that touches multiple parts of the app. tried letting Claude handle a checkout flow refactor once and it "worked" but made assumptions about our cart state that were subtly wrong. took me longer to find the bug than it would've taken to just write it myself.
for code review tbh we treat AI generated code the same as human code. if anything we review it MORE carefully because the model writes confident looking code that can be quietly wrong. like it won't throw errors, the tests will pass, but the logic has an edge case the model didn't consider because it doesn't know your users do weird things (our users definitely do weird things).
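a toy example of what i mean by "confident looking but quietly wrong" (totally made up, not our code): splitting a total evenly across payers. the naive version passes the obvious even-split test and silently loses cents on everything else:

```typescript
// made-up illustration: looks clean, passes the "100 / 4 payers" test,
// quietly drops the remainder whenever the total doesn't divide evenly.
function splitEvenlyNaive(totalCents: number, people: number): number[] {
  const share = Math.floor(totalCents / people);
  return Array(people).fill(share); // 100 / 3 -> [33, 33, 33], one cent vanishes
}

// the version a reviewer should push for: same signature,
// but the remainder gets distributed one cent at a time.
function splitEvenly(totalCents: number, people: number): number[] {
  const share = Math.floor(totalCents / people);
  const remainder = totalCents % people;
  return Array.from({ length: people }, (_, i) => share + (i < remainder ? 1 : 0));
}
```

neither version throws, both look reasonable in a diff. that's why AI output gets the extra-careful pass.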
the biggest shift for me mentally was to stop thinking of it as "the AI writes code for me" and start thinking of it more like "i have a really fast intern who needs very specific instructions and will never push back when something smells off." that framing helps me know when to use it and when to just open a file and type.