Good breakdown. One thing I'd add: the biggest prompt improvement I've made isn't about wording — it's about knowing when to stop prompting and start reading the output critically.
Most developers over-invest in crafting the perfect prompt and under-invest in verifying what comes back. AI-generated code tends to look correct at a glance but often hides subtle logic issues: wrong boundary conditions, inverted checks, missed null cases.
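To make "subtle logic issues" concrete, here's a contrived sketch (invented for illustration, not taken from any real model output) of the boundary-condition kind. Both functions and the discount rules are hypothetical:

```python
def pick_discount(age: int) -> float:
    """Return a discount rate by age band (hypothetical rules:
    under 12 gets 50%, 65 and over gets 30%)."""
    if age < 12:
        return 0.5
    elif age > 65:  # bug: reads fine on a skim, but a 65-year-old
        return 0.3  # falls through to the adult rate (should be >=)
    return 0.0

def pick_discount_fixed(age: int) -> float:
    """Same logic with the boundary verified against the stated rule."""
    if age < 12:
        return 0.5
    elif age >= 65:  # boundary checked deliberately, not assumed
        return 0.3
    return 0.0

print(pick_discount(65))        # 0.0 — the subtle miss
print(pick_discount_fixed(65))  # 0.3
```

Nothing here fails to run or throws an error; it just quietly does the wrong thing at one input, which is exactly why a critical read beats a quick skim.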
My approach: prompt for structure, then manually verify the decisions. The AI is great at scaffolding but unreliable at judgment calls. Once you internalize that distinction, your prompt quality matters less because you're not trusting the output blindly anyway.