Are Surveys the Market Research They Claim to Be?
The marketing world is teeming with people ready to decry traditional market research tools.
Their nihilistic fire is often trained on focus groups and surveys as the evil twins of what they claim is false guidance.
I spent many years moderating focus groups, but let’s put those aside today and focus on the good old survey.
Trained back in the old days, we were taught “garbage in, garbage out”, and no one can deny that questionnaire construction matters.
The first rule is to ask questions that are “reasonable” for a respondent to answer truthfully (and easily). Ask them for opinions, perceptions, concerns, motivations, needs, etc. Stuff they will inherently and instantly know with little cognitive effort. There is no other way to get to these without asking questions, after all.
Tools like choice models, where respondents express a preference between iterations of two things, can also get to important learnings.
Questionnaire design has many other necessary nuances to get to “good” outputs. In a perfect world, test the questionnaire qualitatively (take a few people through it for feedback) before fielding it.
For example, these all matter:
- Explain the point of the task, making the respondent feel valued and motivated (but watch out for introducing bias!)
- Ask easy-to-understand questions, each with as close to a single possible interpretation as you can get. Remember that some respondents may interpret your question differently from you.
- Give respondents options that cover the main potential answers (don’t miss key things out), and include an “other” choice for the rest and a “don’t know”. Never force a response.
- Keep it brief. Brief overall, and short lists of choices for any one question (break into two if needed). It’s unrealistic to get good attention for more than, say, 10 minutes. In fact, in today’s TikTok world, perhaps 5 minutes is a better goal.
- Put the important stuff first, when minds are fresh.
- Ask about opinions etc. before asking for a conclusion/decision (to avoid order effects).
And then, of course, you must ask the right people.
All sorts of bias can creep in without due care on recruitment. One we watch out for in particular is bias from purchase patterns.
For example, asking shoppers of retailer A about retailer B will get a different answer from asking shoppers of retailer B about retailer B.
A point often missed is that bigger brands do “better” simply because more people buy them and know them well. Ideally, the sample needs to be properly balanced (or weighted in the analysis).
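To make the weighting idea concrete, here is a minimal sketch. All the numbers are invented for illustration: a sample that over-represents retailer B’s own shoppers flatters B’s rating, and reweighting each group back to its assumed true market share corrects for that.

```python
# Hypothetical illustration: rebalancing a survey sample whose mix of
# retailer-A vs retailer-B shoppers doesn't match the real market.
# Every figure below is made up for the sketch.

sample = [  # (shopper_of, rating_given_to_retailer_B)
    ("A", 3.0), ("A", 2.5), ("A", 3.5),                      # 3 A-shoppers
    ("B", 4.5), ("B", 4.0), ("B", 4.2), ("B", 4.8),
    ("B", 4.1), ("B", 4.4), ("B", 4.6),                      # 7 B-shoppers
]

market_share = {"A": 0.5, "B": 0.5}  # assumed true 50/50 shopper split

# Weight for each group = target share / achieved share in the sample
n = len(sample)
counts = {g: sum(1 for s, _ in sample if s == g) for g in market_share}
weights = {g: market_share[g] / (counts[g] / n) for g in market_share}

raw_mean = sum(r for _, r in sample) / n
weighted_mean = (
    sum(weights[g] * r for g, r in sample)
    / sum(weights[g] for g, _ in sample)
)

print(f"raw mean rating of B:      {raw_mean:.2f}")
print(f"weighted mean rating of B: {weighted_mean:.2f}")
```

The unweighted mean overstates retailer B’s rating because B’s own (naturally warmer) shoppers make up 70% of the sample; weighting pulls it back toward what a balanced sample would have shown.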
Yes, bots and click farms are a nasty issue, so your survey provider should use the latest screening software and your data should include the usual test questions and traps. I note there are even good ways to trap AI (e.g. asking ‘daft’ questions that a human would notice but an AI won’t).
Particularly given the above, where there is no 100% guaranteed protection, sample size is your friend. In any survey, the risks of being misled shrink as your sample grows.
That’s not just about statistics, it’s common sense. I get frustrated seeing huge conclusions (and headlines) based on small samples.
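The back-of-envelope arithmetic behind “sample size is your friend” is easy to show. This sketch uses the standard 95% margin-of-error formula for a proportion from a simple random sample; the sample sizes are just illustrative.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an observed proportion p
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# An observed "50% agree" means very different things at different n
for n in (50, 200, 1000):
    moe = margin_of_error(0.5, n)
    print(f"n={n:5d}: observed 50% could plausibly be "
          f"{50 - moe * 100:.1f}%..{50 + moe * 100:.1f}%")
```

At n=50 the true figure could plausibly sit anywhere within roughly ±14 points of the observed 50%; at n=1000 that shrinks to about ±3 points, which is why small-sample headlines deserve scepticism.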
Now a human factor on your side of the fence: watch out for confirmation bias. The commissioner of the research (and even the agency) may bring agendas that they then knowingly or unwittingly use in their choice of what to conclude. A data point could be “surprisingly high” or “surprisingly low” depending on your preconceptions.
We here are passionate about benchmarks and norms. Most data points need to be compared to something (even if just an average) to bring meaning.
I was involved in product testing, and from data derived from hundreds of tests and sales outcomes we could easily see that a product scoring 4.2 was an order of magnitude more likely to succeed in market than one scoring 3.9. A 3.9 was merely an “average” result, even if it looked encouraging at first sight.
Finally, on the “people don’t do what they say” point.
Proven methodologies in, say, product or ad testing never simplistically take the direct respondent answer as the final word. They correlate many studies with real-life outcomes to infer that a survey answer gets to the desired meaning. So you can get an accurate steer on “what they do” – even if indirectly.
And one could easily reverse the argument. You may know what people do (e.g. from tracking sales), but if you have no idea why they do it, then you are none the wiser on developing your marketing strategy. As with so many aspects of marketing, this is about “both-ism”. We need to measure behaviour. We also need to understand “the why”.
Most good surveys are trying mainly to understand customers’ minds, not to ask them to reach a judgement. Trying to replicate real life is a fool’s errand. Better understanding informs marketing thinking. It’s not just a 1+1=2 situation. We never expect a survey to directly give you the end decision.
We expect it to inform the decision. To leave you far better equipped to make that decision or to improve your strategy. Often as one input alongside other tools, e.g. other data sources, A/B testing and, indeed, informed gut feel.
And a final word.
Research has to be judged against knowing nothing (i.e. guessing!), not against a mythical ‘perfect understanding’. If research could guarantee a 100% correct answer, we’d not need the expert judgment of marketers – so we’d all be out of a job.
Surveys aren’t a magic bullet, but they’re a powerful ingredient of success. They need to be used wisely and executed well, with their limitations understood.