Thanks for this opportunity. I've read all your books, including some very old ones (like your C++ book on the Booch method), and they're all great. Your work has influenced mine a great deal, and I'm really grateful for so many interesting ideas.
I wish I could ask just one question, but it's going to be quite a few.
I've been working a lot with code metrics and other objective ways of measuring code quality through static analysis, and I've gathered some interesting data that I'd be happy to discuss if you're interested. I've seen some suggested metrics, especially in Clean Code, about lines of code, number of arguments per method, coupling, etc., which is why many of my questions are directed towards measuring code coverage and TDD (since there are plenty of studies showing that TDD improves code quality).
- Do you believe it is possible to objectively measure code quality by analyzing code structure (lines of code, cyclomatic complexity, afferent and efferent coupling, code coverage)?
- Do you believe good design can be measured objectively?
- I've seen a lot of discussion in the C++ community about dynamic polymorphism being a performance issue (it removes many inlining opportunities for the compiler, and the per-object vtable pointer can increase cache misses). What are your thoughts on this, based on games or other CPU-intensive applications?
- I know this is a curveball, but what do you think would be the ideal unit test code coverage (line and branch)? Why?
- Since the ideal code coverage can be tricky to pin down and is sometimes irrelevant, what would be the lower limit? For example, is 10% code coverage meaningful?
- What would be the threshold at which you start seeing improvement?
- I've heard many people state that you can achieve high coverage numbers (for example 80%) and still have pretty bad code. What are your thoughts on that?