In a true TDD method, we end up not writing any code at all until all the test cases are written and are working.
In particular, if you are working on a significant feature that takes 3-4 days of coding, writing test cases for it consumes even more time.
How do people who practice TDD properly handle such situations?
"In a true TDD method, we end up not writing any code at all until all the test cases are written and are working"
What would such tests even be making assertions against?
Also, the question is not whether testing increases the amount of work ("testing should be 50% of your work"); it's about what happens when a large refactoring of the code base necessitates an equally large refactoring of your tests.
This can be a problem, and it used to happen to me more when I would come up with complex abstractions, class hierarchies, etc to model solutions. It's something I would just have to suck up.
Since I've turned to an increasingly functional style of programming (small, single-purpose functions composed into pipelines, pure whenever possible), I find this happens less and less. This is because refactoring largely becomes a matter of recomposing the pipeline. I seldom have to go back and rewrite any existing unit tests.
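To make that concrete, here's a minimal sketch in Python (all function names are made up for illustration): each small pure function gets unit-tested once, and a refactor is mostly just a recomposition of the pipeline.

```python
from functools import reduce

# Small, single-purpose, pure functions -- each gets its own unit test once.
def strip_whitespace(s: str) -> str:
    return s.strip()

def lowercase(s: str) -> str:
    return s.lower()

def collapse_spaces(s: str) -> str:
    return " ".join(s.split())

def pipeline(*funcs):
    """Compose functions left to right into a single callable."""
    return lambda value: reduce(lambda acc, f: f(acc), funcs, value)

# Refactoring is largely recomposition: reorder, add, or drop a step
# without touching the existing functions or their tests.
normalize = pipeline(strip_whitespace, collapse_spaces, lowercase)

print(normalize("  Hello   WORLD  "))  # hello world
```

Swapping `collapse_spaces` out, or inserting a new step, changes only the `pipeline(...)` call; none of the existing functions or their tests need to move.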
Let’s make it clear: TDD can’t work effectively for small companies.
I worked for a big telco firm where this works very well. However, there are many developer teams, and separate testing teams. There are also architects, who document features thoroughly before anything is done.
After the architects' work is done, the developers and testers begin working separately. This way the functional and system/integration tests are finished when the code is done, and actual testing can begin.
You may argue this is not TDD, but that’s the closest you can get without losing efficiency. In fact, testers can start working earlier and may finish earlier; and there is a possibility for developers to run partial function tests during the coding phase. You may note that there are no unit tests mentioned. Those are made by the developers, whenever they see fit. Testing teams don’t (and shouldn’t) care about implementation details.
There was a time when I was proud of having a project with 100% code coverage. But having all features thoroughly tested in a gray-box manner is more important than testing individual functions in the code.
You should consider tests to be half of your work. If you deliver only the main code of a feature, you delivered 50% of your work.
Once you start thinking like that (which is good for you as a professional), you'll start forcing yourself to always write tests. The more tests you write, the faster you'll become at writing them. Basically, you start reaping the full benefits of testing when it becomes a habit in your daily routine.
That said, I know this is very hard and time consuming. My advice to you is to start breaking down your tasks into smaller tasks (if possible), and to estimate your activities taking the tests into consideration (e.g. time*2). So, if you think you have a 2-day task at hand and cannot break it down into smaller tasks, make it a 4-day task to account for testing.
This may seem drastic at first, spending 50% of your time on tests, but in the long run the tests pay for themselves in saved time. Also, it's the only humane way to ensure your code actually works. The alternative is manually testing everything, every day (needless to say, this is not likely to happen).
So, to summarize: my advice to you is to always write tests (or at the very least, try really hard to), even if it takes a long time. At first you'll feel like you're "wasting" a lot of time, but eventually you'll come to appreciate the tests you wrote and realize they save more time than they cost to write. Of course, you have to get good at it. Remember: test code should be cared for as carefully as production code. They are both EQUALLY important parts of the feature you are delivering.
As Kent C. Dodds says, I think it's better to get hands-on with the actual problem before the tests. It's what you need, after all.
When the feature seems to be working as you wanted, start writing tests and fix the errors that occur. The next step is to refactor the code and clean it up, while the tests still pass. You could call my approach STR, for Sketch, Test, Refactor - or some might say Sketch Test Driven.. STD :D
The focus should not be on the procedure, but on the problem. (Testing and refactoring focus on future problems in this case, or on avoiding them, of course.)
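The Sketch → Test → Refactor loop might look like this in Python (the `word_count` function and its test are invented for the example): sketch a quick version while exploring, pin it down with a test once it behaves as wanted, then refactor while the test keeps passing.

```python
# Sketch: a quick first version, written while exploring the problem.
def word_count(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Test: written once the behaviour feels right; it pins that behaviour down.
def test_word_count():
    assert word_count("the cat and the hat") == {
        "the": 2, "cat": 1, "and": 1, "hat": 1,
    }

test_word_count()  # passes against the sketch

# Refactor: clean up the implementation while the same test still passes.
from collections import Counter

def word_count(text):
    return dict(Counter(text.lower().split()))

test_word_count()  # still passes after the rewrite
```

The test never changes between the sketch and the refactor; only the implementation does.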
Hi Syed Fazle Rahman! I honestly don't do TDD if I'm not sure how I want things to look/behave. I'll often experiment a lot and throw away a lot of code before I get an idea of how I want things to look.
That said, if I do know ahead of time what I want an API to look like then TDD works great. But yeah, while tests generally save time in the long run, they can really slow you down if you don't know what the end result should look and behave like. Embrace the experimentation and test it when you like it.
Dave Stewart
Web developer from London
Emil Moe
Senior Data Engineer
I prefer not to fully "TDD". Since I create websites, that means I might write simple tests such as checking URIs and ensuring they return the right status code (usually 200) and content type. But I usually write those after I've created the implementation, though before launch.
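A minimal sketch of that kind of pre-launch smoke check, using only Python's standard library. The local server here is a stand-in for the real site (the page content and paths are placeholders); the `check` helper is the interesting part.

```python
import http.server
import threading
import urllib.request

class Page(http.server.BaseHTTPRequestHandler):
    """Tiny stand-in for the real site: serves an HTML page for any path."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"<h1>hello</h1>")

    def log_message(self, *args):  # keep the test output quiet
        pass

# Bind to port 0 so the OS picks a free port, and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), Page)
threading.Thread(target=server.serve_forever, daemon=True).start()

def check(path, expected_status=200, expected_type="text/html"):
    """Fetch a URI and assert on its status code and content type."""
    url = f"http://127.0.0.1:{server.server_port}{path}"
    with urllib.request.urlopen(url) as resp:
        assert resp.status == expected_status, f"{path}: got {resp.status}"
        assert resp.headers["Content-Type"].startswith(expected_type), \
            f"{path}: got {resp.headers['Content-Type']}"

check("/")  # raises AssertionError if the page misbehaves
```

Against a real site you'd point the URL at staging and list the paths you care about; the status-code and content-type assertions stay the same.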