Hey there, 👋
Most of you might know me as Robert "Uncle Bob" Martin from Cleancoder.
I have been a software professional since 1970. I have spent the last 40 years contributing toward the betterment of the field, from kickstarting the Agile Alliance to authoring landmark books about Agile Programming, Clean Code, and more.
This is your opportunity to ask me anything about programming, right here on Hashnode. I will start answering your questions live at 2 pm CST on 24th January.
I read your book Clean Code and listened to some of your talks, like 'The Future of Programming' and 'The Principles of Clean Architecture', which I enjoyed.
On to my questions:
- Do you still actively code?
- What language families do you like: Wirth-style? C-like? ML? Fortran?
- Do you still have the time and energy to follow current trends within the language ecosystems?
Thx for your time :)
Thanks for this opportunity. I've read all your books, including some very old ones (like the one on C++ and the Booch method), and they are all great. You have influenced my work a lot, and I'm really grateful for so many interesting ideas.
I wish I could ask just one question, but there are going to be a lot.
I've been working a lot with code metrics and other objective ways of measuring code quality based on static analysis. I've been getting some interesting data about this, which I can discuss if you are interested. I've seen some suggested metrics, especially in Clean Code, about the number of lines of code, the number of arguments per method, coupling, etc., and that's why a lot of my questions will be directed towards measuring code coverage and TDD (since there are loads of studies showing that TDD improves code quality).
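For context on the coupling metrics mentioned above: afferent and efferent coupling feed the package metrics described in Uncle Bob's books, namely instability I = Ce / (Ca + Ce), abstractness A, and distance from the main sequence D = |A + I - 1|. Here is a minimal sketch of the arithmetic, using purely hypothetical counts:

```cpp
#include <cmath>
#include <cstdio>

// Package coupling metrics (as defined in Uncle Bob's books):
//   instability  I = Ce / (Ca + Ce)
//   abstractness A = abstract classes / total classes
//   distance from the main sequence D = |A + I - 1|
// All counts below are hypothetical, purely to show the arithmetic.
int main() {
    double Ca = 3.0;   // afferent coupling: outside classes that depend on this package
    double Ce = 9.0;   // efferent coupling: outside classes this package depends on
    double abstractClasses = 1.0;
    double totalClasses    = 10.0;

    double I = Ce / (Ca + Ce);                  // 0.75: fairly unstable
    double A = abstractClasses / totalClasses;  // 0.10: mostly concrete
    double D = std::fabs(A + I - 1.0);          // 0.15: close to the main sequence

    std::printf("I=%.2f  A=%.2f  D=%.2f\n", I, A, D);
    return 0;
}
```

A D near 0 suggests the package's abstractness is in balance with its instability; a D near 1 suggests it is either rigidly concrete and heavily depended upon, or abstract but unused.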
- Do you believe it is possible to objectively measure code quality by analyzing code structure (number of lines, cyclomatic complexity, afferent and efferent coupling, code coverage)?
- Do you believe good design can be measured objectively?
- I've seen a lot of discussion in C++ about dynamic polymorphism being a performance issue (since it disables a lot of opportunities for compiler inlining, and since it adds a vtable pointer per object, it can increase cache misses; see the sketch after this list). What are your thoughts on this in the context of games or other CPU-intensive applications?
- I know this is a curveball, but what do you think would be the ideal unit test code coverage (line and branch)? Why?
- Since the ideal code coverage can be tricky to pin down and is sometimes irrelevant, what would be the lower limit? For example, is 10% code coverage relevant?
- What would be the threshold at which you start seeing improvement?
- I've heard a lot of people state that you can achieve high coverage numbers (for example 80%) and still have pretty bad code. What are your thoughts on that?
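Regarding the dynamic-polymorphism question above, here is a minimal sketch (class names are invented for illustration) of the trade-off being asked about: the virtual call goes through a vtable and every object carries a vptr, so the call site is hard to inline, while the CRTP version resolves the call at compile time and is a normal inlining candidate.

```cpp
#include <cstdio>

// Dynamic polymorphism: each object carries a vtable pointer and the call
// below is an indirect call, which the compiler usually cannot inline.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Circle : Shape {
    double r;
    explicit Circle(double radius) : r(radius) {}
    double area() const override { return 3.14159265358979 * r * r; }
};

double total_area(const Shape* const* shapes, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += shapes[i]->area();   // indirect call through the vtable
    return sum;
}

// Static polymorphism (CRTP): the concrete type is known at compile time,
// so the call resolves statically and can be inlined.
template <typename Derived>
struct ShapeCRTP {
    double area() const { return static_cast<const Derived*>(this)->area_impl(); }
};

struct Square : ShapeCRTP<Square> {
    double side;
    explicit Square(double s) : side(s) {}
    double area_impl() const { return side * side; }
};

int main() {
    Circle c(1.0);
    const Shape* shapes[] = { &c };
    Square sq(2.0);
    std::printf("dynamic: %f, static: %f\n", total_area(shapes, 1), sq.area());
    return 0;
}
```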
Hi Robert, I have a question regarding integration testing.
Let's assume that we have a project where several classes/modules have been developed using TDD (possibly by different developers). This implies that the communication between these classes has been mocked out, so there might be a bug/incompatibility hidden in their communication once we integrate them (e.g. class A calling the foo() method of class B while the class B object is in a state where it does not expect foo() to be called).
Occasionally we have such bugs slip into production in real projects. Even though we have integration tests where class A calls the foo() method of B (so the glue/adapter code is covered by the tests), B is not in the erroneous state that would expose the bug.
So my question is: what is your approach to enumerating which integration tests to write? It seems that writing just enough tests to cover the glue code is not enough. On the other hand, writing integration tests that verify the communication of objects in every possible state increases the number of tests exponentially. What's your advice?
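Here is a minimal sketch of the scenario described above, with a hypothetical open()/foo() protocol on B (the state machine is invented for illustration): the hand-rolled mock accepts any call sequence, so the unit test for A passes, and only a test against the real B exposes the protocol bug.

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical contract of the collaborator: foo() may only be called after open().
class B {
    bool open_ = false;
public:
    virtual ~B() = default;
    virtual void open() { open_ = true; }
    virtual void foo() {
        if (!open_) throw std::logic_error("foo() called before open()");
    }
};

// Hand-rolled mock used while developing A with TDD: it records the call
// but knows nothing about B's state machine.
class MockB : public B {
public:
    bool fooCalled = false;
    void foo() override { fooCalled = true; }
};

// Class under test; the bug is that it forgets to call open() first.
class A {
    B& b_;
public:
    explicit A(B& b) : b_(b) {}
    void run() { b_.foo(); }   // should be: b_.open(); b_.foo();
};

int main() {
    // Unit test with the mock: passes, because the mock accepts any call sequence.
    MockB mock;
    A unitTested(mock);
    unitTested.run();
    assert(mock.fooCalled);

    // Integration test with the real B: only this exposes the hidden incompatibility.
    B real;
    A integrated(real);
    bool threw = false;
    try { integrated.run(); } catch (const std::logic_error&) { threw = true; }
    assert(threw);
    return 0;
}
```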
Hey Uncle Bob, thanks for doing this one. I have a question regarding TDD and how it fits into planning and estimation.
I'm working at a company that values technical task breakdown and reasonably accurate estimates. By practicing this, we usually get the design and the separation of major components nailed down before development begins. This doesn't mesh very well with TDD, as I already kind of have my design laid out.
How would you approach this, and where do you think the fine line lies between planning and estimating on one side and letting the tests drive the design on the other?