Choosing between qualitative and quantitative research methods is a bit like choosing between tools: while both a hammer and a pair of pliers would (somehow) get a nail into a wall, one tool does the job better and more efficiently than the other. And if we combine tools, one to drive the nail into the wall and one to pull it out, we can achieve true excellence. The same applies to research methods: quantitative research is important in its own right, but it is not the “answer to the ultimate question of life, the universe, and everything”; sometimes qualitative techniques serve the purpose better. One application area that is predestined to be qualitative-led is design research within human-centered design.
Human-centered design principles excel at providing organizations with a different lens for problem-solving. Design researchers often go out and interview and observe the people who use the products. But how many participants are required to gain relevant insights?
Since design research is a form of qualitative research, the approach to sampling must differ from quantitative methods, which seek to achieve statistical significance. Too often, organizations apply quantitative criteria for sampling to design research—a fundamental misunderstanding of its purposeful logic.
Quantitative approaches to research often aim to test predetermined hypotheses and produce results that can be applied to the general population. In contrast, the qualitative techniques used in design research focus on understanding more complex psychosocial or usability issues and are most useful for answering “why” and “how” questions such as, “How would a user interact with a product in a certain environment?” Because their aims are different, the success of qualitative research cannot be governed by an approach using performance indicators that were developed for an entirely different context.
Researchers can get their sampling method right by ensuring they are clear on the purpose of their research and then following best practices for qualitative sampling.
Seeking the right problem to solve: Applying quantitative logic to qualitative inquiry
In the business world, numbers are king. Having a number as the answer to a problem instills credibility in corporate settings that are largely dominated by market researchers, data scientists, and executives with backgrounds in quantitative methods. Thus, answering the question “How many participants should be in a sample size?” with a firm number seems reassuring. But the question of whether a sample size is statistically sufficient stems from a very quantitative logic—and is not meaningful when applied to qualitative design research approaches.
In addition, qualitative methodologies were long considered more of an art form and seemed difficult to mold into a structured methodology with easily verifiable criteria. As a result, readily available criteria from quantitative methodologies, such as objectivity, validity, and reliability, were applied to qualitative approaches to fill the gap and judge whether they cleared the bar. In quantitative research, rules for determining the right sample size are mathematically derived: the sample must be large enough to permit valid inferences about the population.
In qualitative inquiry, however, there are no rules for sample size; the number of participants depends on what researchers need to know, the purpose of the inquiry, the business problem at stake, what lends credibility to the study, and what can be done with limited resources. In short, researchers must focus on purposeful sampling rather than adhering to quantitative constructs.
Solving the problem in the right way: How researchers can get sampling right
Because design problems vary widely, the choice of sampling technique must always be made case by case, depending on the purpose the sample is meant to fulfill.
Criteria such as age, gender, status, role, geographic location, function in the organization, beliefs held, and so forth, may serve as good starting points for sampling.
More advanced research typically aims to move beyond such sociodemographic measures in order to sample more strategically. There are generally two main directions to take: depth or breadth of the subject of inquiry. While the most typical, study-relevant, and information-rich cases (depth) offer a huge payoff because they tend to reveal what is most common, they also carry the risk of sampling too narrowly. Design researchers are therefore well advised to include the peripheries (breadth) of a population in their sample: people who may not be the primary audience or who are no longer actively involved with a product or service. Such groups are worth investing time in because they can provide contrasting and comparative data, shedding light on a phenomenon in entirely different ways.
We simply cannot know in advance how many respondents will be needed. The real challenge is answering the question: How well does my sample help me solve the design problem at hand? Sampling stops when no significant new insights arise. Time and budget constraints will likely force a stop a little before this point of saturation, and methodologically this is not a problem: not reaching saturation does not invalidate earlier findings; it merely indicates that the subject matter has not been covered exhaustively. In practice, such exhaustive coverage is impossible anyway, since human behavior, and therefore the number of emergent themes, is potentially limitless.
Thus, the recipe for success is to identify the right problem and then take a purposeful enough sample to answer it appropriately.