Thanks, Chris Chapman, for this great blog article!
In your Quant UX book, you recommend a higher number of tasks per participant to achieve precise individual-level estimates. However, in this blog post you used only 6 tasks for the MaxDiff analysis, which, according to your book, would primarily yield precise sample-level estimates.
I’m curious: How much of an impact does a relatively low number of tasks have on the precision of individual-level estimates?
Can you elaborate on when I should definitely follow the textbook recommendation of 12-15 tasks (depending on the specific MaxDiff)? I would think that a segmentation study needs more accuracy than a simple check of whether some item is highly important to a subset of survey participants?
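One rough way I've tried to build intuition on this myself is a small simulation: generate respondents with known utilities, simulate their best/worst picks under logit noise, and see how well simple best-minus-worst count scores (not HB estimation, so just an approximation of what the book describes) recover each respondent's true utilities at 6 vs. 15 tasks. All the numbers here (10 items, 4 items per task, 200 respondents) are my own illustrative assumptions, not from the book or the post:

```python
import numpy as np

rng = np.random.default_rng(0)
N_ITEMS, N_RESP, ITEMS_PER_TASK = 10, 200, 4  # assumed design, for illustration only

def simulate_recovery(n_tasks):
    """Mean per-respondent correlation between true utilities and
    simple best-minus-worst count scores, given n_tasks MaxDiff tasks."""
    cors = []
    for _ in range(N_RESP):
        true_u = rng.normal(size=N_ITEMS)      # respondent's true item utilities
        counts = np.zeros(N_ITEMS)
        for _ in range(n_tasks):
            shown = rng.choice(N_ITEMS, ITEMS_PER_TASK, replace=False)
            noisy = true_u[shown] + rng.gumbel(size=ITEMS_PER_TASK)  # logit-style noise
            counts[shown[np.argmax(noisy)]] += 1   # "best" pick
            counts[shown[np.argmin(noisy)]] -= 1   # "worst" pick
        if counts.std() > 0:                       # skip degenerate all-equal counts
            cors.append(np.corrcoef(true_u, counts)[0, 1])
    return float(np.mean(cors))

print(f"6 tasks:  mean r = {simulate_recovery(6):.2f}")
print(f"15 tasks: mean r = {simulate_recovery(15):.2f}")
```

In runs like this, individual-level recovery improves noticeably going from 6 to 15 tasks, which is roughly the gap between "sample-level precision" and "individual-level precision" I was asking about.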