This is a well-balanced perspective. The trade-off between normalization and denormalization is rarely about raw performance alone; it's about long-term complexity. Your experience highlights that premature denormalization shifts the burden from query optimization to operational and cognitive overhead, especially around migrations, consistency, and write amplification. Measuring first, identifying real bottlenecks, and then selectively denormalizing around proven hot paths is a disciplined approach that protects both performance and maintainability. The reminder that the true cost is complexity, not disk or CPU, is an important one for architectural decisions.
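To make the trade-off concrete, here is a minimal sketch in SQLite of denormalizing one proven hot path (an order listing that would otherwise join to `customers`). All table and column names are hypothetical; the point is that the fast read is paid for with write amplification and a consistency obligation on every insert:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized schema: customer data lives in exactly one place.
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(id),
    total REAL
);
-- Denormalized read model for one measured hot path:
-- order listings that need the customer name without a join.
CREATE TABLE order_summary (
    order_id INTEGER PRIMARY KEY,
    customer_name TEXT,
    total REAL
);
""")

def place_order(order_id, customer_id, total):
    # Write amplification: one logical insert now touches two tables,
    # and both writes must stay consistent (here, a single transaction).
    with conn:
        cur.execute("INSERT INTO orders VALUES (?, ?, ?)",
                    (order_id, customer_id, total))
        name = cur.execute("SELECT name FROM customers WHERE id = ?",
                           (customer_id,)).fetchone()[0]
        cur.execute("INSERT INTO order_summary VALUES (?, ?, ?)",
                    (order_id, name, total))

cur.execute("INSERT INTO customers VALUES (1, 'Ada')")
place_order(100, 1, 42.0)

# Hot-path read: no join needed against the denormalized table.
row = cur.execute(
    "SELECT customer_name, total FROM order_summary WHERE order_id = ?",
    (100,)).fetchone()
print(row)  # ('Ada', 42.0)
```

The hidden cost shows up later: if a customer is renamed, every matching `order_summary` row must be updated too, which is exactly the migration and consistency overhead the comment warns about.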