I am working for a fintech startup where we are heavy on unit tests. For a particular release, I spent an entire day writing various unit tests (over 100). My colleague, who is peer-reviewing my code, has suggested that I should write some more tests for some cases that are extremely unlikely to happen. You can say it's extremely rare for those cases to occur. The current test suite covers 99% of the commonly-occurring scenarios.
I admire my co-worker, but I don't agree with this particular approach. What do you think? Should I spend time writing more unit tests for cases that are extremely unlikely to happen? This will delay the release for another day.
There are already a lot of good comments, but I'd like to add one more detail: 'extremely unlikely to happen' is very vague. Is it extremely unlikely to happen for a single user, or extremely unlikely to happen in production? My point is that a 0.001% chance of something happening will practically never occur while you test the app yourself, but it can happen every day in production. As others mentioned, it's a tradeoff. But to make the call on that tradeoff, you have to think in terms of production scale (how many times a day, or on average every hour, this path could be hit), not just what you see when testing the app by yourself, as one user. With that data in mind, you'll be in a far better position to evaluate the relevance of the test.
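To make that concrete, here's a back-of-envelope sketch (all numbers are hypothetical, just to illustrate the scale effect): the same per-request probability that is invisible during manual testing becomes a daily occurrence at production volume.

```python
def expected_hits_per_day(probability: float, requests_per_day: int) -> float:
    """Expected number of times a low-probability path fires per day."""
    return probability * requests_per_day

p = 0.00001  # a "0.001% chance" per request

# One developer clicking around: maybe 200 requests in an afternoon.
print(expected_hits_per_day(p, 200))        # ~0.002 -> you'll almost never see it

# Production traffic: say 1,000,000 requests per day.
print(expected_hits_per_day(p, 1_000_000))  # ~10 -> roughly ten times every day
```

The point isn't the exact numbers; it's that the decision should be driven by an estimate of production frequency, not by whether you personally managed to trigger the case.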
I'm not sure what I can say, it depends mostly on your company policy.
There are not a lot of situations where covering 100% of paths with tests is a desirable goal, even when money is involved (note that 100% of lines != 100% of paths).
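The lines-vs-paths distinction is easy to see in code. A minimal sketch (hypothetical function, not from the question): two independent branches can reach 100% line coverage with just two tests, even though four distinct paths exist.

```python
def apply_fees(balance: float, is_premium: bool, is_overdrawn: bool) -> float:
    if is_premium:
        balance += 1.0   # loyalty credit
    if is_overdrawn:
        balance -= 5.0   # overdraft fee
    return balance

# These two tests execute every line (100% line coverage)...
assert apply_fees(10.0, True, False) == 11.0
assert apply_fees(10.0, False, True) == 5.0

# ...but two of the four paths were never exercised:
assert apply_fees(10.0, True, True) == 6.0     # premium AND overdrawn
assert apply_fees(10.0, False, False) == 10.0  # neither
```

With n independent branches there are up to 2^n paths, which is why "100% of paths" is rarely a practical target even when a line-coverage report looks perfect.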
In the end, it's about balancing the cost of writing the tests against the cost of not writing them. The cost of writing a test is the more obvious one: your time, plus the extra time to run the test suite. The cost of not writing it is harder to pin down: the chance that a bug slips into production, and what that bug costs you when it does.
You can't determine this exactly for individual tests of course, but the company should balance these factors and aim for a reasonable amount of test "coverage" (not a % of lines but a somewhat subjective fraction of reasonable cases).
I'd also suggest comparing to the coverage of other, equally critical code in your company. If similar code is better tested, you should catch up. If other critical code is hardly tested at all, you'd be better off writing tests for that than adding edge cases to your code.
I'd add that I would personally love to write perfect code with near-100% coverage. If someone more important than me tells me to do it, I'll be happy to comply, despite doubts about profitability. Usually though, they tell me to do less of it because of time constraints.
As long as the critical aspects of the code are tested, I wouldn't worry about it. There will always be edge cases that you've never considered until people start actually using the code/product. I don't believe in 100% test coverage, unless, as @j said, there's something like money involved.
Is it about money, so the data accuracy needs to be as close to 100% as possible? If yes, don't ask this question; write the tests.
If it's about something less important, like a chart where you don't need the best possible accuracy, you can argue that the delay isn't worth it.
Don't let your need to finish something interfere with the quality of your product.
1 day is nothing.
cedric simon
Web dev
I would say that what doesn't happen today will happen in the future. If a part of your application is critical enough that you already wrote 100 tests for it, then adding a few more edge-case scenarios won't hurt. Those edge cases will happen in prod, sooner than you can imagine. A 0.01% probability, when the app is used by thousands of people, will break the statistics lol.
If your deadline is too short, you need to argue about that; but if you can handle a few more cases, it's better to do so.
Better safe than sorry.