This matches what I'm seeing with data pipelines. Juniors will prompt for "ETL in Python" and get back something that technically works on toy data but has zero error handling, leaks connections, and will absolutely crater at scale.
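The leaked-connection half of that is usually a one-pattern fix, and it's exactly the pattern the generated toy version omits. A minimal sketch (names like `load_rows` and the `events` table are made up for illustration; `sqlite3` stands in for whatever warehouse driver you actually use):

```python
import sqlite3
from contextlib import closing

def load_rows(db_path, rows):
    """Insert a batch transactionally; close the connection no matter what."""
    # closing() guarantees conn.close() even if something below raises --
    # the part the "technically works on toy data" version skips.
    with closing(sqlite3.connect(db_path)) as conn:
        with conn:  # commits on success, rolls back the whole batch on error
            conn.execute(
                "CREATE TABLE IF NOT EXISTS events "
                "(id INTEGER PRIMARY KEY, payload TEXT)"
            )
            conn.executemany(
                "INSERT INTO events (payload) VALUES (?)",
                [(r,) for r in rows],
            )
        # Connection is still open here, so we can verify the load.
        return conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```

The point isn't this exact code, it's that "what happens when an insert fails halfway through the batch?" is a question the junior never asked, and the transaction-per-batch answer has to be a deliberate choice.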
The real problem isn't the tool. It's that bad code written fast feels like progress until you're debugging it at 2am. AI assistants just accelerated the feedback loop from "six months" to "two weeks."
Best thing we've done: require code review with a specific focus on failure modes before anything touches production. Forces them to actually understand what they shipped.