The confident hallucination problem cuts deeper than most teams realize. Low token entropy is a feature, not a bug: cross-entropy training rewards peaked distributions over memorized patterns, so the model emits high-confidence tokens whether or not they're grounded in anything. A hallucination can look exactly as certain as a fact.

The second-model verifier approach works, but it introduces its own failure mode: what happens when the verifier is confidently wrong? Two models trained on similar data tend to share blind spots, so agreement alone isn't evidence of correctness.

The real fix isn't just adding a second opinion. It's surfacing uncertainty as a first-class output: models should be forced to say 'I don't know' with the same confidence they use to assert facts. Production systems that treat uncertainty as a signal, not noise, survive longer than those relying on consensus alone.
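As a minimal sketch of what "uncertainty as a first-class output" can look like in practice, assuming you have access to the raw logits and an independent verifier verdict. The names here (`Judgment`, `judge`, the 1.5-nat threshold) are illustrative assumptions, not any particular framework's API:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Judgment:
    status: str   # "answer" or "uncertain" -- abstention is a real state
    text: str

def token_entropies(logits: np.ndarray) -> np.ndarray:
    """Shannon entropy (nats) of each token's predictive distribution.

    logits: shape (seq_len, vocab_size).
    """
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def judge(answer: str, logits: np.ndarray,
          verifier_agrees: bool,
          max_mean_entropy: float = 1.5) -> Judgment:
    """Return the answer only when both uncertainty signals are clean.

    Entropy only catches *hesitant* hallucinations; a confident
    hallucination sails through with low entropy, which is why the
    verifier verdict is checked independently rather than averaged in.
    """
    hesitant = token_entropies(logits).mean() > max_mean_entropy
    if hesitant or not verifier_agrees:
        return Judgment("uncertain", "I don't know.")
    return Judgment("answer", answer)
```

The design choice worth noting: verifier disagreement maps to abstention, not to a tie-break. Picking a winner between two confident models is consensus masking shared blind spots; returning `"uncertain"` pushes the doubt up to the caller, where it can actually be acted on.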