I am Robert C. Martin (Uncle Bob). Ask me anything.

Started on January 24, 2019 8:00 PM

Hey there, 👋

Most of you might know me as Robert "Uncle Bob" Martin from Cleancoder.

I have been a software professional since 1970. I have spent the last 40 years contributing to the betterment of the field, from kickstarting the Agile Alliance to authoring landmark books about Agile Programming, Clean Code, and more.

This is your opportunity to ask me anything about programming here on Hashnode. I will start answering questions live at 2 pm CST on January 24th.

125 discussions

Hey Robert! Thanks for the AMA! 🙌

Which of the current trends in Software Engineering are overrated? What practices in SE have changed for good since 1970?

Overrated? At the moment, Microservices.

Changes for good since 1970? Agile. No question. Agile is the acknowledgement that discipline overrides ceremony in software development.


Hey Robert,

I read your book Clean Code and listened to some of your talks, like 'The Future of Programming' and 'The Principles of Clean Architecture', which I enjoyed.

On to my questions:

  • Do you still actively code?
  • Which language families do you like: Wirth, C-like, ML, Fortran?
  • Do you still have the time and the energy to follow current trends within the language ecosystems?

Thx for your time :)

David Domingo, you need to tag him so he gets a notification :)

Thanks for this opportunity. I've read all your books, including some very old ones (like the C++ book using the Booch method), and they are all great. You have influenced my work a lot, and I'm really grateful for so many interesting ideas.

I wish I could ask just one question, but it will be quite a few.

I've been working a lot with code metrics and other objective ways of measuring code quality based on static analysis, and I've been getting some interesting data that I can discuss if you are interested. I've seen some suggested metrics, especially in Clean Code, about the number of lines of code, the number of arguments per method, coupling, etc., which is why a lot of my questions are directed towards measuring code coverage and TDD (since there are loads of studies showing that TDD improves code quality).

  • Do you believe it is possible to objectively measure code quality by analyzing code structure (number of lines, cyclomatic complexity, afferent and efferent coupling, code coverage)?
  • Do you believe good design can be measured objectively?
  • I've seen a lot of discussion in C++ about dynamic polymorphism being a performance issue (since it disables a lot of opportunities for compiler inlining, and since it adds a vtable pointer to every object, it can increase cache misses). What are your thoughts on this in the context of games or other CPU-intensive applications?
  • I know this is a curve ball, but what do you think would be the ideal unit test code coverage (line and branch)? Why?
  • Knowing ideal code coverage can be tricky and sometimes irrelevant, what would be the lower limit? For example, is 10% code coverage relevant?
  • What would be the threshold when you start feeling improvement?
  • I've heard a lot of people state that you can achieve high coverage numbers (for example 80%) and still have pretty bad code. What are your thoughts on that?

  • Do you believe it is possible to objectively measure code quality by analyzing code structure (number of lines, cyclomatic complexity, afferent and efferent coupling, code coverage)?

    You can get some information from these metrics. But you cannot determine code or design quality from them.

  • Do you believe good design can be measured objectively?

    Not with static analysis metrics. But I think good designers can pass appropriate judgements on good or bad design. In the end, there is only one metric that matters: Was the manpower required to build, deploy, and maintain the system appropriately minimized?

  • I've seen a lot of discussion in C++ about dynamic polymorphism being a performance issue (since it disables a lot of opportunities for compiler inlining, and since it adds a vtable pointer to every object, it can increase cache misses). What are your thoughts on this in the context of games or other CPU-intensive applications?

    When you need to save 3 nanoseconds, dynamic polymorphism is a factor. So in deep loops with tight real time deadlines, removing dynamic polymorphism is an option. In all other cases the cost is so small as to be nonexistent; and the advantages are so great as to be overriding.

  • I know this is a curve ball, but what do you think would be the ideal unit test code coverage (line and branch)? Why?

    100%. Line and branch. Obviously. Of course you can probably never achieve this level; but it should still be the "ideal" (your word).

  • Knowing ideal code coverage can be tricky and sometimes irrelevant, what would be the lower limit? For example, is 10% code coverage relevant?

    Lower than 100% means you aren't done. Of course you'll never really be done. But there is no stopping point lower than 100%.

  • What would be the threshold when you start feeling improvement? I've heard a lot of people state that you can achieve high coverage numbers (for example 80%) and still have pretty bad code. What are your thoughts on that?

    If you have 80% coverage, it means that 20% of the code might not work. That's not a good number. No one should be happy about 80% coverage.
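
To make the line-versus-branch distinction in the coverage answers above concrete, here is a minimal C++ sketch (the clampReading function and its test are hypothetical, not taken from the discussion). A single test can reach 100% line coverage while exercising only half of the branches, which is exactly the gap that "100%, line and branch" is meant to close.

    #include <cassert>

    // Hypothetical helper, used only to illustrate line vs. branch coverage.
    // Negative readings are clamped to zero; non-negative readings pass through.
    int clampReading(int value) {
        if (value < 0)      // one branch point: taken / not taken
            value = 0;
        return value;
    }

    int main() {
        // This single test executes every line of clampReading(), so a
        // line-coverage tool (e.g. gcov/lcov) reports 100% line coverage.
        assert(clampReading(-5) == 0);

        // But the "condition false" branch -- a non-negative value passing
        // straight through -- is never taken, so branch coverage is only 50%.
        // A second test such as assert(clampReading(7) == 7) is still needed.
        return 0;
    }

The answer about dynamic polymorphism above concerns the cost of an indirect call inside a hot loop. A rough, hypothetical sketch of that trade-off (the Shape/Circle names and the loops are invented for illustration, not taken from the discussion):

    #include <cstddef>

    // Dynamic polymorphism: the call goes through the vtable, so the compiler
    // generally cannot inline area() when only a Shape* is visible.
    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    struct Circle final : Shape {
        double r;
        explicit Circle(double radius) : r(radius) {}
        double area() const override { return 3.14159265358979 * r * r; }
    };

    // Tight inner loop over a polymorphic collection: every iteration pays for
    // an indirect call -- the few nanoseconds referred to in the answer above.
    double totalArea(const Shape* const* shapes, std::size_t n) {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += shapes[i]->area();   // dispatched through the vtable
        return sum;
    }

    // The same loop over concrete Circle objects: area() can be inlined (and
    // the loop vectorized), which is why hot real-time loops sometimes avoid
    // virtual dispatch. Everywhere else the flexibility outweighs the cost.
    double totalArea(const Circle* circles, std::size_t n) {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += circles[i].area();   // direct, inlinable call
        return sum;
    }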

What do you think about the Go (golang) language? Its structs, embedding/composition, no inheritance, interfaces? Errors as values? The concurrency model? Do you think the decisions made in Go's design are actually smart and make sense for the problems its creators and maintainers are trying to solve? (Generics will be released soon.)

Hi Robert, I have a question regarding integration testing.

Let's assume that we have a project where several classes/modules have been developed using TDD (possibly by different developers). This implies that communication between these classes has been mocked out, so there might be a bug/incompatibility hidden in their communication once we integrate them (e.g. class A calls the foo() method of class B, but the class B object is in a state where it does not expect foo() to be called).

Occasionally we have such bugs slip into production in real projects. Even though we have integration tests where class A calls method foo() of B (so the glue/adapter code is covered by the tests), B is not in the erroneous state that would expose the bug.

So my question is: what is your approach to enumerating which integration tests to write? Writing just enough tests to cover the glue code does not seem to be enough. On the other hand, writing integration tests that verify the communication of objects in every possible state increases the number of tests exponentially. What's your advice?
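
To make the failure mode above concrete, here is a minimal, hypothetical C++ sketch (the B interface and its "must be opened before foo()" rule are invented for illustration). A's unit test passes against a hand-rolled mock of B that accepts foo() unconditionally, while the real B rejects foo() in the wrong state, so the defect only appears once the real objects are wired together.

    #include <cassert>
    #include <stdexcept>

    // Hypothetical collaborator interface.
    struct B {
        virtual ~B() = default;
        virtual void foo() = 0;
    };

    // Real implementation: foo() is only legal after open() -- a state rule.
    struct RealB : B {
        bool opened = false;
        void open() { opened = true; }
        void foo() override {
            if (!opened) throw std::logic_error("foo() called before open()");
        }
    };

    // Mock used in A's unit tests: it records the call but enforces no state
    // rule, so it cannot reveal the ordering bug.
    struct MockB : B {
        int fooCalls = 0;
        void foo() override { ++fooCalls; }
    };

    // Class under test: it forgets that B must be opened first.
    struct A {
        explicit A(B& collaborator) : b(collaborator) {}
        void run() { b.foo(); }   // bug: never arranges for B to be opened
        B& b;
    };

    int main() {
        // Unit test written against the mock: green, and the defect stays hidden.
        MockB mock;
        A a(mock);
        a.run();
        assert(mock.fooCalls == 1);

        // Only a test (or production) that wires A to the real B exposes it:
        //     RealB real; A a2(real); a2.run();   // throws std::logic_error
        return 0;
    }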

Conrad Taylor, I completely agree with your points.

My concern is that we do not have a structured way/methodology for writing sufficient integration tests. For unit tests we have TDD, which by definition leads to a test suite that covers the whole unit (there is also mutation testing). But for integration tests we do not have a similar methodology.

I also follow a test-first approach when writing integration tests. That way the glue code that connects the units is not written until there is a failing integration test. But as I explained in the original question, these tests do not seem to be enough.

One way of mitigating the problem is to minimize the chance of developing incompatible units through pair/mob programming. Still, this leaves room for bugs slipping through due to the human factor involved. Another way is to write tests that exercise the contract of units developed by different developers/teams (as Robert Martin proposed), but this would require a lot more tests, especially if done for each unit.
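
A sketch of that contract-test idea in hypothetical C++ (the Store interface and its "open before put" rule are invented for illustration, not something proposed in this thread): the contract is written once and run against both the real implementation and the fake that other teams use in their unit tests, so a test double that drifts away from the real behaviour fails in CI instead of hiding a state-dependent bug until integration.

    #include <cassert>
    #include <map>
    #include <stdexcept>

    // Hypothetical interface shared between two teams.
    struct Store {
        virtual ~Store() = default;
        virtual void open() = 0;
        virtual void put(int key, int value) = 0;   // only legal after open()
        virtual int  get(int key) const = 0;
    };

    // The contract, written once and run against every implementation --
    // including the fake used in other units' tests.
    void checkStoreContract(Store& store) {
        bool rejected = false;
        try { store.put(1, 42); }                   // put() before open()
        catch (const std::logic_error&) { rejected = true; }
        assert(rejected && "put() before open() must be rejected");

        store.open();
        store.put(1, 42);
        assert(store.get(1) == 42);
    }

    // Real implementation, owned by one team.
    struct DbStore : Store {
        bool opened = false;
        std::map<int, int> rows;
        void open() override { opened = true; }
        void put(int key, int value) override {
            if (!opened) throw std::logic_error("put() before open()");
            rows[key] = value;
        }
        int get(int key) const override { return rows.at(key); }
    };

    // Fake used by another team when unit-testing its own code against Store.
    struct FakeStore : Store {
        bool opened = false;
        std::map<int, int> rows;
        void open() override { opened = true; }
        void put(int key, int value) override {
            if (!opened) throw std::logic_error("put() before open()");  // mirrors the rule
            rows[key] = value;
        }
        int get(int key) const override { return rows.at(key); }
    };

    int main() {
        DbStore real;
        FakeStore fake;
        checkStoreContract(real);   // the real implementation honours the contract
        checkStoreContract(fake);   // ...and so does the double that stands in for it
        return 0;
    }

This does not remove the combinatorial state-space problem raised in the question, but it concentrates each unit's state rules in one reusable suite instead of re-proving them in every integration test.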
