Rafael dos Santos Miguel Filho: That’s a good question. I think it’s important that tests are useful but not brittle. I know that some people strive for 100% coverage of every scenario. Not only does writing them consume a lot of time (e.g. testing every getter/setter), it also makes requirements changes very difficult to implement. There’s always a balance, and it can be difficult to achieve.
For my test cases, I’ll usually start off with the happy path. I’ll also study the code, looking for subtle cases where things can go wrong, and try to test those ‘bad’ cases. These can include expecting exceptions to be thrown, or checking that the appropriate response is given if a process fails. To avoid test code duplication, I’ll often use TestCases (in NUnit) with various inputs. If I’m expecting some failures, I’ll also parameterise the test with the expected output and assert against it.
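As a sketch of that approach, here is a hypothetical `Calculator.Divide` method tested with NUnit's `[TestCase]` attribute: the happy-path inputs share one parameterised test body, and a 'bad' case asserts that the expected exception is thrown (the class and method names are my own invention for illustration):

```csharp
using System;
using NUnit.Framework;

public class Calculator
{
    // Hypothetical method under test.
    public int Divide(int dividend, int divisor)
    {
        if (divisor == 0)
            throw new ArgumentException("Divisor must be non-zero.");
        return dividend / divisor;
    }
}

[TestFixture]
public class CalculatorTests
{
    // Several inputs (including the expected output) share one test body,
    // avoiding duplicated test code.
    [TestCase(10, 2, ExpectedResult = 5)]
    [TestCase(9, 3, ExpectedResult = 3)]
    [TestCase(-8, 4, ExpectedResult = -2)]
    public int Divide_ReturnsQuotient(int dividend, int divisor)
        => new Calculator().Divide(dividend, divisor);

    // A 'bad' case: assert that the expected exception is thrown.
    [Test]
    public void Divide_ByZero_Throws()
        => Assert.Throws<ArgumentException>(() => new Calculator().Divide(1, 0));
}
```

Because `Divide` is functional and idempotent, the same test can be run and re-run with no setup or teardown between runs.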
Functional method calls are best for this as the method will have a return value to assert against. If it can be made idempotent, even better; I can run (and re-run) the method multiple times in the debugger if necessary.
If the test is for making sure that multiple services get called with the expected inputs, one way I might do this is to mock the service. I’ll set up the mock so that it takes the arguments for a method call and stores them in a local test variable. That way, there’s also something to assert against. If using Moq (and probably other mocking libraries), it’s also possible to check the mock’s invocations directly. This can sometimes be trickier, though, as the mock might be called by things that aren’t under test. Specifying e.g. that the fourth invocation should have a specific value can lead to a brittle test, as it relies on a particular invocation order.
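A minimal sketch of that capture pattern with Moq, assuming a hypothetical `IEmailService` and a `Notifier` class that calls it (none of these names come from any real codebase): the mock's `Callback` stores the arguments in local test variables, which the test then asserts against; `Verify` is shown as the alternative:

```csharp
using Moq;
using NUnit.Framework;

public interface IEmailService
{
    void Send(string recipient, string body);
}

// Hypothetical system under test that delegates to the service.
public class Notifier
{
    private readonly IEmailService _email;
    public Notifier(IEmailService email) => _email = email;
    public void Notify(string recipient, string message) => _email.Send(recipient, message);
}

[TestFixture]
public class NotifierTests
{
    [Test]
    public void Notify_SendsEmailWithExpectedArguments()
    {
        // Local test variables to capture the mock's arguments.
        string? capturedRecipient = null;
        string? capturedBody = null;

        var emailMock = new Mock<IEmailService>();
        emailMock
            .Setup(s => s.Send(It.IsAny<string>(), It.IsAny<string>()))
            .Callback<string, string>((recipient, body) =>
            {
                capturedRecipient = recipient;
                capturedBody = body;
            });

        new Notifier(emailMock.Object).Notify("alice@example.com", "Order shipped");

        // Assert against the captured values rather than invocation order.
        Assert.That(capturedRecipient, Is.EqualTo("alice@example.com"));
        Assert.That(capturedBody, Is.EqualTo("Order shipped"));

        // Alternatively, check the invocation via Moq directly:
        emailMock.Verify(s => s.Send("alice@example.com", It.IsAny<string>()), Times.Once);
    }
}
```

Asserting on captured values tends to survive refactors better than asserting on a numbered invocation, since it doesn’t care how many other calls happened around it.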
If the method I’m testing is functional and relies on the output of other services, I won’t always verify that the service mocks have been called – if the mocks weren’t called correctly, the final output would be wrong, so asserting on the output checks this implicitly.
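To sketch that implicit check, here is a hypothetical `PriceCalculator` that depends on an `ITaxService` (again, invented names): the test stubs the tax rate and asserts only on the final total, with no `Verify` call, because a correct total already implies the service was consulted:

```csharp
using Moq;
using NUnit.Framework;

public interface ITaxService
{
    decimal GetRate(string region);
}

// Hypothetical functional method whose output depends on the service.
public class PriceCalculator
{
    private readonly ITaxService _tax;
    public PriceCalculator(ITaxService tax) => _tax = tax;
    public decimal Total(decimal net, string region) => net * (1m + _tax.GetRate(region));
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Total_IncludesTax()
    {
        var taxMock = new Mock<ITaxService>();
        taxMock.Setup(t => t.GetRate("UK")).Returns(0.20m);

        var total = new PriceCalculator(taxMock.Object).Total(10m, "UK");

        // No taxMock.Verify(...) needed: if GetRate hadn't been called
        // with "UK", the total couldn't come out to 12.
        Assert.That(total, Is.EqualTo(12m));
    }
}
```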