Solid breakdown, especially the part about not training from scratch. One concept I'd add to any AI engineer's mental model in 2026: model disagreement as signal.
Most curricula teach you to pick "the best model" for a task. But once you're shipping AI features to real users, the harder skill is detecting when your single model is confidently wrong. I've started running the same prompt through 2-3 different model families (different training lineages, not different sizes of the same family) and using their disagreement as a quality signal. When they all agree, I treat the answer as routine and ship it. When they split, that's where engineering judgment pays off, and that's the case I route to review.
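To make that concrete, here's a minimal sketch of the pattern. It assumes you supply your own `call_model(model_name, prompt)` function for whatever providers you use; the surface-form similarity check and the 0.8 threshold are placeholders I'd tune per task, not a prescription.

```python
# Cross-model disagreement as a quality signal (sketch).
# `call_model` is a placeholder for your own client code; the similarity
# metric and threshold below are assumptions to illustrate the idea.
from difflib import SequenceMatcher
from typing import Callable


def pairwise_agreement(answers: list[str]) -> float:
    """Mean pairwise text similarity across answers (1.0 = identical, 0.0 = total split)."""
    if len(answers) < 2:
        return 1.0
    scores = []
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            a, b = answers[i].strip().lower(), answers[j].strip().lower()
            scores.append(SequenceMatcher(None, a, b).ratio())
    return sum(scores) / len(scores)


def answer_with_disagreement_check(
    prompt: str,
    models: list[str],
    call_model: Callable[[str, str], str],  # (model_name, prompt) -> answer text
    agreement_threshold: float = 0.8,       # assumed cutoff; tune per task
) -> dict:
    """Query several model families and flag the result for review when they split."""
    answers = [call_model(m, prompt) for m in models]
    agreement = pairwise_agreement(answers)
    return {
        "answers": dict(zip(models, answers)),
        "agreement": agreement,
        "needs_review": agreement < agreement_threshold,
    }
```

In practice I'd swap the string-similarity check for something task-specific (exact match on structured outputs, an embedding comparison, or a cheap judge model for free-form text), but the shape stays the same: fan out across families, score the agreement, and only escalate the low-agreement cases.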
Curious if you've integrated multi-model verification into your own AI engineer learning path, or if it's still mostly single-model fluency in 2026 curricula?