A core aim of your customer satisfaction surveys should be accuracy. Sometimes the truth hurts, but the truth of what your customers think is absolutely paramount. As a result, writing unbiased survey questions could be the most important piece of the puzzle.

Surveys are designed to elicit feedback, so it’s all about asking the right questions in the right way. Many surveys ask biased questions, whether intentionally or unintentionally, and these flawed questions undermine the customer’s ability to tell you the truth.

Why are unbiased survey questions so important?

Biased survey questions have a big business impact: they inevitably lead to inaccurate results. Organizations that then make decisions on the back of this data are at great risk of making bad choices. Unbiased survey questions, on the other hand, support confident, informed business decisions.

Telling the difference between biased and unbiased survey questions

Biased survey questions use wording and structure in a way that influences a respondent’s answer. Sometimes this is done on purpose, but more typically it occurs as a result of poor survey design and evaluation.

The key to asking unbiased survey questions lies in knowing what a biased question looks like. You can then steer clear of those characteristics when designing your own survey.

Types of biased survey questions

There are five main types of biased survey question:

  • Leading questions
  • Loaded questions
  • Double-barreled questions
  • Absolute questions
  • Confusing questions

Leading Questions

Leading questions use language and structure that lead a respondent toward a particular answer. At first glance such questions often appear relevant and reasonable, even though they are pushing respondents toward specific responses. Here are a couple of examples:

How much did you enjoy our wonderful new service?

Are you looking forward to our great new store opening?

Both examples contain leading statements (‘wonderful new service’ and ‘great new store’). Positively charged adjectives like ‘wonderful’ and ‘great’ should not be included: they nudge respondents toward a favorable answer, and the resulting data presumes that respondents agree with these statements.

A good practice is to ensure that questions contain no subjective opinion. The status of the service/store as ‘new’ can be accepted, within reason, as objective fact; a 6-month-old store or service, for example, is not new in anyone’s book. However, ‘wonderful’ and ‘great’ are statements of opinion. When composing survey questions it’s worth testing them to make sure they are not leading respondents toward a particular answer.

Loaded Questions

Loaded questions contain a built-in assumption that forces respondents to answer in a way that may not accurately reflect their true opinion or experience. Here’s an example:

Where do you like to go on foreign vacations?

Asking this question assumes the survey participant takes vacations abroad, which may not be the case. This can result in respondents providing inaccurate and non-representative answers. 

For example, someone who has never been abroad in their life could answer “Brazil” because the idea of going to Brazil is appealing. But that’s not what the question is asking. It’s always best to identify and avoid loaded questions. In this case, it would be better to qualify respondents first, e.g. “Do you take foreign vacations at least once every 5 years?”, and then follow up with a non-loaded question about preferred destinations.

Double-barreled Questions

A common unintentional mistake is asking two survey questions in one. Here’s an example:

How satisfied or dissatisfied are you with the product and service that you have received?

It should be clear that this question is asking for customer satisfaction/dissatisfaction feedback. The response options presumably sit on a scale ranging from very satisfied down to very dissatisfied.

But also note that the question concerns both product and service. What if the customer is satisfied with the product but dissatisfied with the service? How can they accurately answer this question? 

Survey questions should always be written in a way that examines just one thing at a time. To elicit genuinely representative feedback, this question should be restructured as two clear and concise questions.

Absolute Questions

Absolute questions demand a yes/no answer to a statement framed in absolute terms. These can result in bias because respondents are not able to provide more representative feedback. Here’s an example:

Are you always happy with the service we provide? (Yes / No)

Using the word ‘always’ means that many respondents are likely to answer ‘No’, even if they are actually happy most of the time. Absolute questions often use words like ‘always’, ‘all’, ‘every’ and ‘ever’. How best to restructure this question depends on the desired insight.

Is the business keen to learn how happy customers are with the service overall? Or does it want to discover how frequently each service experience meets or exceeds expectations? These are subtle differences that demand careful word choice. In either case, the question is best asked in a non-absolute way, allowing the respondent to choose their answer from a scale.

Confusing Questions

Confusing or complex questions lead to bias because responses are shaped by the respondent’s lack of understanding. Poor grammar, convoluted question structure and technical jargon are common culprits. Here’s an example:

Do you think it’s possible or is it impossible to improve the performance of the ‘abc-123’ product?

This is an inelegantly formed question on a number of levels. Firstly, it challenges the respondent about the concept of ‘possibility’ on a subject over which they have no control. Asking customers about the ‘possibility’ of them trying a new restaurant dish is fine. But the possibility of a performance improvement to a product? 

Secondly, it asks about possibility/impossibility in different ways (i.e. ‘do you think’ and ‘is it’). ‘Do you think’ is inviting the customer’s opinion. ‘Is it’ is testing their knowledge and understanding. The customer can only give one response, so how can that be properly evaluated?

Thirdly, it assumes that the respondent knows what an ‘abc-123’ is, and even has some insight into its current and potential performance. The results of such a question would be unusable.

Test Your Unbiased Survey Questions

It’s not a trivial task to compose survey questions that will deliver unbiased, representative customer responses, so it’s always worth testing and evaluating them before launch. Here are some tips.

  • Review your questions to ensure that they will not create response bias (a quick check like the one sketched after this list can help flag the obvious culprits).
  • Ensure that answer options cover all possible responses.
  • Make questions as short and simple as possible.
  • Try to avoid making participants think more than they need to.
  • Test your survey question on some trial respondents and invite their feedback.
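
To help with the first tip, a few lines of script can flag the most obvious warning signs discussed above: absolute words like ‘always’ and ‘every’, charged adjectives like ‘wonderful’ and ‘great’, and a tell-tale ‘and’ that may signal a double-barreled question. The sketch below is a minimal, hypothetical illustration in Python; the word lists and the flag_bias helper are assumptions made for the example rather than part of any survey tool, and an automated check is no substitute for trying questions out on real trial respondents.

```python
# Illustrative only: scan draft survey questions for the warning signs
# discussed in this article. The word lists below are assumptions for
# the sake of the example; extend them for your own surveys.

ABSOLUTE_WORDS = {"always", "all", "every", "ever"}   # absolute questions
CHARGED_WORDS = {"wonderful", "great", "amazing"}     # leading questions


def flag_bias(question):
    """Return a list of warnings for a draft survey question."""
    warnings = []
    words = {w.strip("?.,!'\"").lower() for w in question.split()}

    if words & ABSOLUTE_WORDS:
        warnings.append("Absolute wording: " + ", ".join(sorted(words & ABSOLUTE_WORDS)))
    if words & CHARGED_WORDS:
        warnings.append("Charged adjective: " + ", ".join(sorted(words & CHARGED_WORDS)))
    if " and " in question.lower():
        warnings.append("Possible double-barreled question (contains 'and')")

    return warnings


if __name__ == "__main__":
    drafts = [
        "How much did you enjoy our wonderful new service?",
        "Are you always happy with the service we provide?",
        "How satisfied are you with the product and service you received?",
    ]
    for q in drafts:
        print(q, "->", flag_bias(q) or ["No obvious bias flags"])
```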

You get 10 free surveys to test and trial with Customer Thermometer. You don’t even need to enter any payment details. Just complete the form below and away you go!