I get the appeal: strictNullChecks, noImplicitAny, all that. But I'm genuinely confused about when strict mode actually prevents real bugs vs. just making you type more.
I've got a fairly large codebase where we enabled it last quarter. Yeah, it caught some null-reference stuff that maybe would've exploded in prod, but we also spent about two weeks adding | undefined everywhere and turning working code into type-assertion soup:
const value = obj.field as string | undefined;
if (value) { doThing(value); }
vs just... not doing that and testing properly?
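To be fair, I know what the "right" version is supposed to look like. If the field is actually declared optional instead of asserted, the compiler narrows it for you and no cast is needed. Roughly this (made-up names, obviously simplified from our real code):

```typescript
// hypothetical shape — in our real code this comes from an API layer
interface Obj {
  field?: string; // declared optional up front, so no `as` cast later
}

function doThing(s: string): string {
  return s.toUpperCase();
}

function handle(obj: Obj): string | undefined {
  // under strictNullChecks, this guard narrows obj.field from
  // `string | undefined` to `string`, so doThing accepts it directly
  if (obj.field !== undefined) {
    return doThing(obj.field);
  }
  return undefined;
}
```

That's fine in isolation. My question is about the aggregate cost of getting a big existing codebase to this state, not whether the pattern itself is clean.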
I'm not trying to be contrarian. I just want to understand the actual calculus here. Is the benefit really proportional to the cognitive overhead, or does it just feel like best practice? Like if my test coverage is already solid, am I optimizing for the wrong failure mode?