How to Minimise Bias in Survey Research

August 23, 2017

This may sound obvious, but survey research only has value if the results are valid. However, many brands are basing decisions on unreliable surveys.


In this post, we look at how to minimise response bias and nonresponse bias, two ways in which surveys can be skewed, on top of the familiar problem of the sample being too small or unrepresentative. Most market researchers know how to minimise sampling problems, but many pay too little attention to these other sources of bias, which can be even more important.


Response bias is caused by asking non-neutral questions or providing non-neutral response options, so that the answers are nudged in one direction or another. I experienced an example of this only today, when Lloyds Bank asked if I was “delighted” with the service I’d received.

Nonresponse bias, in contrast, occurs when the responses are valid for those who respond, but those who don't respond are different from these respondents in ways that matter for the purposes of the research.

To explain these two biases in more detail - and how market researchers can minimise them - we spoke to Professor Patrick Barwise, one of Attest’s investors and advisers.

Patrick is emeritus professor of management and marketing at London Business School, a Patron of the Market Research Society, and author of successful books on marketing strategy, execution and, most recently, leadership.

How can the wording of questions cause response bias?

“We all know that responses to questions can be sensitive to the wording. Some of the recent work on behavioural economics is actually based on that - asking questions that are technically the same (like the respondent’s estimate of the percentage of people who do or don’t have a particular attribute) but where different ways of framing the question elicit significantly different responses.

“This is why people like me are fanatical about piloting. To make sure respondents are interpreting the question in the way you mean them to, you need to do a lot of piloting. You may also want to ask the same question in several ways, ideally to different pilot samples, just to see if you get equivalent responses.

“In academic research, to get published, you may have to ask each respondent the same question using five different wordings. You then check that the different wordings are all measuring the same construct by looking at how closely the responses correlate, using a statistic rather impressively called Cronbach’s Alpha. This is technically measuring ‘reliability’ (are they all measuring the same construct?) rather than validity (are they measuring the right construct?). All this is enormously tedious for the respondents, who are asked the same question several times over, but it enables the academic to put the Cronbach’s alpha into their paper in order to get the referees to accept it.”
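As a rough illustration (mine, not part of the interview), Cronbach's alpha for k wordings of the same question is k/(k−1) × (1 − sum of per-wording variances / variance of respondents' totals). A minimal Python sketch:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for k question wordings.

    items: list of k lists, each holding the same respondents'
    scores for one wording, in the same respondent order.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Identical responses across all three wordings -> perfect reliability
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```

Values near 1 suggest the wordings are measuring the same construct; a commonly cited rule of thumb in academic work is to want alpha above roughly 0.7.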

That sounds very technical. What methods can non-academics use to avoid response bias?

“In the real world, you can tackle response bias in two phases: first, at the qual - pre-pilot - stage, and then at the quant stage, mainly using A/B testing. By pre-pilot, I mean literally going to just four friendly people, perhaps at the client, giving them the question and talking to them each individually about their interpretation of it and how they might respond to it.

“Then you do some proper semi-structured qual research: you get people’s responses and then get them to expand on why they gave those responses, still on a small scale; perhaps just a dozen or so people, individually so they don’t influence each other. At that point you should be ready to do A/B testing with different samples using the most promising different wordings.

“A/B testing involves asking differently worded questions to different subsamples, rather than asking the whole sample a question worded in several ways. The number of respondents and the number of rounds you have to do will depend on both the scale of the project and the extent to which the responses hang together.

“If the different ways of asking the question give you the same quantitative answers then it’s pretty robust and it doesn’t matter too much which question you use. You’d probably go for the one that gave you the highest response rate.
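To make “the same quantitative answers” concrete, a standard way to compare two wordings run on separate subsamples is a two-proportion z-test. This is an illustrative sketch of the general technique, not Attest's method, and the function name is my own:

```python
from math import sqrt, erf

def two_proportion_z(yes_a, n_a, yes_b, n_b):
    """Two-sided z-test: do wordings A and B elicit the same share
    of a given answer? Returns (z, p_value)."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)  # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 60/100 'yes' under wording A vs 40/100 under wording B:
# a gap that large is unlikely to be chance
z, p = two_proportion_z(60, 100, 40, 100)
```

A small p-value means the wordings are pulling responses apart; a large one suggests the question is robust to rewording.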

“Response bias can be minimised by going through those sorts of disciplines properly with an increasing number of real respondents. Most surveys could be improved in this way.”

Are there any types of survey that are especially vulnerable to response bias?

“Yes there are. For example, surveys can be an unreliable source of insights into many aspects of major purchases such as car buying. Part of the problem is that the purchase is spread over weeks or even months. But also, you have to be careful about anything in which the response could make the respondent look better or worse to himself or herself and to other people.

“For instance, if someone is paying a premium for a luxury car which is functionally pretty similar to a mass market car, and the real reason why they’re buying it is to show other people and themselves that they’re rich and successful, they’re not going to say, ‘I chose the BMW to look rich and successful.’ What they will say is, ‘I chose the BMW because I love the styling’ or ‘I identify with the brand because it’s a driver’s car.’ Anything in which the responses are not neutral in terms of people's self-presentation is problematic.

“If you’ve got different wording in which one version will tend to bias the response one way and another will bias it another way, you can test how sensitive it is to the wording. But sometimes there’s going to be a bias however you word it because any positive response makes respondents look good – maybe by suggesting that they care about the environment or whatever. If that is the case, you need more qual research and piloting to get to the bottom of it. There’s no magic bullet.”

Let’s say you want to find out about a respondent’s alcohol consumption, for example. How can you encourage truthful answers?


“Booze is one classic area where most people are in denial, and surveying them is not a reliable way to get this information. Another is people’s use of new technology. I was involved in a video ethnographic study of how much people were using PVRs (personal video recorders). It was actually a precursor to Gogglebox, and Channel 4 was one of the sponsors.

“What we found was that people’s self-perception was that they hardly ever watched commercials any more. They genuinely seemed to have believed that they always watched off the PVR and fast-forwarded through the commercials. However, by filming them over a week using their TVs, we found that in reality PVR usage was only around 15-20% and they still saw a lot of commercials.

“I’m a great fan of survey research where you’re measuring things like opinions; how much respondents like something. It’s ideal for NPS, for example. However, if a survey is asking something which is either hard for someone to answer or their answers make them look better or worse, you may need to look for another method.”

Can allowing respondents to answer anonymously help avoid response bias?

“Yes, going anonymous can help. For instance with things like 360-degree feedback where people are rating their subordinates, bosses and colleagues, anonymity is crucial.

“If you’re doing an employee survey it’s really important that they believe their anonymity will be protected as they might think they’re going to get into trouble. It’s best to get an outside supplier to do that as they’re more likely to be trusted.”

Avoiding bias is not just about the questions you ask, it’s also about who’s answering. Why do brands need to be aware of nonresponse bias?

“Market researchers are trained to worry a lot about sample size but if there is some kind of systematic difference between the people who respond and the people who don’t respond, the sample size is usually the least of your problems.

“Nonresponse bias is something you can’t directly observe because the only data you’ve got is from people who have responded and not from those who haven’t, so it’s a bit of a nasty.

“One thing you can do is find another way of reaching a sample of the people who didn’t respond and maybe use more intensive methods to survey just a few of them, but with a higher response rate. Then see if their responses were very different from the ones from your main sample. Another thing you can do is see if the late responses are very different from the early responses which suggests there might be something else going on.”
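The late-vs-early check described above can be sketched in a few lines. This fragment is my own illustration, not from the interview: compare the earliest and latest quarters of responses in arrival order, on the theory that late responders are the closest stand-in you have for nonresponders.

```python
from statistics import mean

def early_vs_late_gap(scores_in_arrival_order):
    """Mean score of the earliest quarter of respondents minus the
    mean of the latest quarter. A large gap hints that late (and,
    by extension, non-) responders differ from early ones."""
    q = max(1, len(scores_in_arrival_order) // 4)
    early = scores_in_arrival_order[:q]
    late = scores_in_arrival_order[-q:]
    return mean(early) - mean(late)

# Early responders rate 5, late responders rate 3: gap of 2
print(early_vs_late_gap([5, 5, 5, 5, 3, 3, 3, 3]))  # 2
```

What counts as a “large” gap depends on the scale and the decision at stake; the point is simply to make the comparison rather than assume responders speak for everyone.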

What kind of response rate do you need to aim for to avoid nonresponse bias?

“If you’ve got more than 50% response then you’d be pretty unlucky if your respondents were very biased. But in a lot of contexts, especially online, where you’ve often got a response rate of only, say, 2% or so, it could well be that the people who are responding are more interested in the product or the issue than the 98% who are not responding, which could seriously bias the results.

“There is a tendency in market research to think that as long as the sample is big enough the results are valid, but this is one of the big reasons why they might not be.”

Is there a risk of nonresponse bias from the type of people most willing to complete surveys, e.g. professional panellists?

“If you’re incentivising people it’s a particular risk. With online surveys one of the things you have to protect against is people putting in junk just to get the reward. It’s obvious if a survey that’s supposed to take 20 minutes comes back in 3 minutes that something is wrong. Your software needs to screen for that.

“Clearly some people are wise to that so they fill the survey in in two minutes and then wait 15 minutes to send it, so if you can measure keystrokes and delays between keystrokes that’s even better.
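A simple screen along these lines flags any completion time far below the typical one. This is an illustrative sketch; the one-third-of-median cutoff is my assumption, not Attest's actual rule, and real platforms would also use the keystroke-timing signals mentioned above.

```python
from statistics import median

def flag_speeders(durations_seconds, floor=None):
    """Return indices of responses completed implausibly fast.
    Default cutoff: one third of the median completion time
    (an assumed threshold; tune it per survey)."""
    cutoff = floor if floor is not None else median(durations_seconds) / 3
    return [i for i, t in enumerate(durations_seconds) if t < cutoff]

# A survey that typically takes ~20 minutes, returned once in 3 minutes
print(flag_speeders([1200, 1150, 1000, 180]))  # [3]
```

Measuring submission time rather than just total elapsed time helps catch the fill-fast-then-wait trick the interview describes, which is why per-keystroke timing is the stronger signal.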

“You can also test for internal consistency. You can ask about things in which some of the responses are nonsense. For instance, if you’re asking a prompted brand awareness question (‘Which of the following brands have you ever bought or are you aware of?’) you can include a couple of brands that don’t actually exist.

“If a respondent picks a dummy answer, you might discard that response altogether, or say that two such picks would disqualify them. Sometimes people might think they’ve heard of a brand because they’re mixing it up with something else, but it would be a flashing light that the response is less reliable.”
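The dummy-brand check is easy to automate. A sketch (the brand names here are invented placeholders, and the two-pick disqualification threshold follows the interview's suggestion):

```python
def dummy_brand_hits(selected, dummy_brands, disqualify_at=2):
    """Count picks of non-existent brands in a prompted-awareness
    question, and flag the respondent once they reach the
    disqualification threshold."""
    hits = sum(1 for brand in selected if brand in dummy_brands)
    return hits, hits >= disqualify_at

fakes = {"Larnox", "Quevell"}  # invented brands planted in the answer list
print(dummy_brand_hits(["BMW", "Larnox"], fakes))      # (1, False)
print(dummy_brand_hits(["Larnox", "Quevell"], fakes))  # (2, True)
```

One hit might be an honest mix-up, so flagging the response as less reliable rather than discarding it outright is a reasonable default.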

Are there any other alarm bells that bias may have occurred?

“Extreme values you’d look at. Claimed behaviours which seem unlikely, you’d look out for those too. These things are easier to spot at the sample level than at an individual level. If it looks as if you’re getting responses that are way out of line with previous surveys or other people’s research, especially if the other research is based on harder data, then you’d say, ‘We’ve got a problem here.’ That could be either a response bias or a nonresponse bias.”

Finally, do you have any other top tips for avoiding bias in customer surveys?

“The top tip is: do much more piloting than you think you need. Do pre-piloting, then piloting. Do some A/B testing on a small scale. It’s a very nitty-gritty, getting-your-hands-dirty kind of process, so you just have to put in the grunt work.”

Conclusion

While you might actually want to sway people’s responses in a particular direction (as Lloyds did with mine), if you don’t aim for a neutral and representative survey, you’re missing the point of market research.

Robust surveys help you really understand your customers and do more than simply provide headlines for your latest PR campaign (e.g. ‘92% of customers say they are delighted with our service’).

Done properly, surveys can offer a true temperature check for your brand and the marketplace, and can provide real, actionable insights. This data can guide your strategy, show you opportunities and prevent you from making costly mistakes - so it stands to reason you want it to be accurate.

Being aware of the possibility of response bias and nonresponse bias means you can take more care when designing your surveys, while basic testing can offer the certainty you need to base decisions on the results.

To learn how Attest can help you avoid both types of bias, book a demo today or call us on 0330 808 4746 to chat to our team of market research experts.
