I am Robert C. Martin (Uncle Bob). Ask me anything.
Hi Robert, I have a question regarding integration testing.
Let's assume that we have a project where several classes/modules have been developed using TDD (possibly by different developers). This implies that communication between these classes has been mocked out, so there might be a bug/incompatibility hidden in their communication once we integrate them (e.g. class A calling foo() method of class B, but class B object is in a state where it does not expect foo() to be called).
Occasionally we have such bugs slipping into production in real projects. Even though we have integration tests where class A calls method foo() of B (so the glue/adapter code is covered by the tests), B is not in the erroneous state that would expose the bug.
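The scenario above can be sketched in a few lines of Python (a minimal illustration; the `start()` state flag and the `Mock` double are assumptions, not part of the original question):

```python
from unittest.mock import Mock

class B:
    """A component with an internal state invariant."""
    def __init__(self):
        self.started = False

    def start(self):
        self.started = True

    def foo(self):
        # foo() is only valid after start() -- the hidden contract.
        if not self.started:
            raise RuntimeError("foo() called before start()")
        return "ok"

class A:
    """A calls B without checking B's state."""
    def __init__(self, b):
        self.b = b

    def run(self):
        return self.b.foo()

# Unit test of A: B is mocked, so the state contract is invisible.
def test_a_unit():
    b = Mock()
    b.foo.return_value = "ok"
    assert A(b).run() == "ok"   # passes

# Integration test: B happens to be started, so the bug stays hidden.
def test_a_b_integration():
    b = B()
    b.start()
    assert A(b).run() == "ok"   # passes
```

Both tests pass, yet `A(B()).run()` on a B that was never started raises at runtime, which is exactly the class of bug that slips through.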
So my question is: what is your approach to enumerating which integration tests to write? It seems that writing just enough tests to cover the glue code is not enough. On the other hand, writing integration tests that verify the communication of objects in every possible state increases the number of tests exponentially. What's your advice?
I tend to write integration and unit tests for each feature to drive the implementation of user story from the outside in (i.e. BDD).
Next, the goal of integration tests is to exercise the expected assumptions about the connected components, while the goal of unit tests is to exercise the expected assumptions about each individual component.
If the integration or unit tests are insufficient for the component(s) under test, then it's entirely possible to receive errors when the code is exercised in a context in which it hasn't been tested.
In summary, one shouldn't expect an insufficient integration or unit test to catch errors when the code is being exercised outside the context for which it has been tested.
Conrad Taylor, I completely agree with your points.
My concern is that we do not have a structured way/methodology for writing sufficient integration tests. For unit tests we have TDD, which by definition leads to a test suite that covers the whole unit (there is also mutation testing). But for integration tests we do not have a similar methodology.
I also follow a test-first approach when writing integration tests. That way the glue code that connects the units is not written until there is a failing integration test. But as I explained in the original question, these tests do not seem to be enough.
One way of mitigating the problem is to minimize the chance of developing incompatible units through pair/mob programming. Still, this leaves room for bugs slipping through due to the human factor involved. Another way is to write tests that exercise the contract of units developed by different developers/teams (as Robert Martin proposed), but this would require a lot more tests, especially if done for each unit.
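One common way to write such contract tests is to run the same assertions against both the real unit and the test double that other teams mock it with, so the double can't silently drift from the real contract. A sketch (the `B`/`FakeB` classes and the `start()`/`foo()` contract are assumptions for illustration):

```python
import unittest

class B:
    """The real unit, owned by one team."""
    def __init__(self):
        self.started = False
    def start(self):
        self.started = True
    def foo(self):
        if not self.started:
            raise RuntimeError("foo() called before start()")
        return "ok"

class FakeB:
    """Hand-rolled test double used in another team's unit tests."""
    def __init__(self):
        self.started = False
    def start(self):
        self.started = True
    def foo(self):
        if not self.started:
            raise RuntimeError("foo() called before start()")
        return "ok"

class BContract:
    """Shared contract assertions, mixed into concrete test cases."""
    def make_b(self):
        raise NotImplementedError

    def test_foo_after_start(self):
        b = self.make_b()
        b.start()
        self.assertEqual(b.foo(), "ok")

    def test_foo_before_start_is_rejected(self):
        with self.assertRaises(RuntimeError):
            self.make_b().foo()

# The same contract runs against the real unit and its double.
class RealBContractTest(BContract, unittest.TestCase):
    def make_b(self):
        return B()

class FakeBContractTest(BContract, unittest.TestCase):
    def make_b(self):
        return FakeB()
```

If the fake ever allows a call sequence the real B forbids (or vice versa), one of the two contract test cases fails, catching the drift before it becomes an integration bug.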