Some CSAT surveys do better than others. Maybe you have highly engaged customers, or a cool incentive that drives responses. Maybe yours is shorter and punchier than most: only 4 or 5 questions rather than 14 or 15. Maybe you’re achieving a 5% or even 10% response rate.

Unfortunately, any customer survey with a low response rate suffers from non-response bias: the customers who reply are rarely representative of the customers who don’t. In other words, your CSAT survey isn’t telling you the truth.
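To make the point concrete, here’s a hypothetical simulation – not data from the article, and the response propensities are invented purely for illustration. If happier customers are even slightly more likely to reply, the measured average drifts well above the true one.

```python
import random

random.seed(42)

# Illustrative assumption: true satisfaction scores of 1-5,
# spread uniformly across 100,000 customers.
N = 100_000
population = [random.randint(1, 5) for _ in range(N)]

# Illustrative assumption: response propensity rises with satisfaction.
# A dissatisfied customer (score 1) replies 1% of the time, a delighted
# one (score 5) replies 9% of the time -- a 5% response rate overall.
responses = [s for s in population if random.random() < 0.01 + 0.02 * (s - 1)]

true_mean = sum(population) / len(population)
measured_mean = sum(responses) / len(responses)
print(f"true mean CSAT:     {true_mean:.2f}")
print(f"surveyed mean CSAT: {measured_mean:.2f}")  # noticeably higher
```

The survey reports a rosier picture than reality, because the unhappy customers who most need attention are precisely the ones least likely to answer.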

It’s time to confront the institutionalised notion that long customer surveys are good for customer satisfaction. As you take on the challenge of boosting response rates and getting to the heart of improving customer satisfaction, make sure you explode the following five myths.

MYTH #1: Surveys are long because we’ve always done it this way

The traditional 10-question customer satisfaction survey has a lot in common with the multiple-choice market research questionnaires originally developed in the 1940s. This is no coincidence. Such was the brilliance of these questionnaires, and their impact on the development of market intelligence in the post-war era, that when some bright spark adopted the entire methodology and applied it to the quest for customer satisfaction, nobody challenged it.

You need to ask: is that methodology still fit for purpose?

MYTH #2: A survey designed by committee brings benefits to all

Stripped down to its bare bones, a purposeful and focused customer satisfaction survey appears conspicuously short; too short to be an efficient use of corporate resources. What’s more, if the people in charge of understanding customer satisfaction are the only ones allowed to design its questions, this casts a disempowering and wholly undemocratic shadow on the rest of the organisation.

Such misguided thinking gives rise to the Frankensurvey – a monstrous, oversized collaboration of disparate parts unwittingly born of a quest for perfection.

MYTH #3: Don’t worry about the poor response rate, we’ll just ask another million customers

Ignoring the impact of bias and focusing purely on the number of responses is a pragmatic – albeit slightly delusional – way of turning the survey participation challenge into a pure numbers game. If raising the response rate is off the table because nobody is willing to compromise on the survey itself, attention turns instead to increasing the raw number of responses to a level deemed ‘representative’. Tactics for achieving this typically involve repeatedly issuing the survey to the same respondents, and/or extending the overall pool of respondents. The cost of doing so is negligible, so why not fill your boots?

Bear in mind that achieving a sample of 1,000 customers on a 0.1% response rate means pestering 1 million customers, 999,000 of whom will regard the invitation with indifference, despair or something in between.
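The arithmetic behind that claim is simple to sketch. This back-of-envelope helper (an illustration, not anything from the article) shows how quickly the invitation count balloons as the response rate falls:

```python
import math

def invitations_needed(target_sample: int, response_rate: float) -> int:
    """Invitations required to expect `target_sample` responses
    at the given response rate (e.g. 0.01 for 1%)."""
    return math.ceil(target_sample / response_rate)

# How many customers must be pestered for 1,000 responses?
for rate in (0.10, 0.01, 0.001):
    invites = invitations_needed(1_000, rate)
    print(f"{rate:.1%} response rate -> {invites:,} invitations, "
          f"{invites - 1_000:,} of them ignored")
```

At 0.1%, a thousand responses really does cost a million invitations – and every ignored invitation is a small withdrawal from the goodwill the rest of the customer journey worked to build.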

MYTH #4: All these stats are going to be unbelievably useful when I present them to the board

The results of your Frankensurvey are in, and its extensive length and complexity have been thoroughly vindicated by the sheer volume of data amassed. Should anyone wish to explore it, there’s plenty of detailed justification for strategies adopted and processes implemented.

What’s missing is the clarity and accuracy of the objective: to improve customer satisfaction. As Seth Godin put it, “Don’t ask a question unless you truly care about the answer.” Are you insulating yourself with data, or are you engendering happiness?

MYTH #5: The customer journey is one thing and the customer satisfaction survey is something else

Organisations rightly pour their resources and intellect into optimising the ‘customer journey’. Everyone in the company prioritises and values it. You’re all about ‘injecting delight’ and ‘promoting effortlessness’. You get a tangible ROI from shaving milliseconds off the time it takes for a payment to process via your website.

So why on earth do so many organisations fail to make customer satisfaction surveys a seamless part of the customer journey? Is it good enough to put so much effort into a positive customer experience, only to undermine it all by emailing a 10-question customer survey three days later?

Short customer surveys offer better response rates and relevance

If this all sounds a lot like the way you’ve been approaching CSAT surveys to date, then you aren’t alone. However, you could learn a lot from the results reported by a growing band of organisations who’ve embraced the radical concept of not just a shorter survey design, but the ultimate in short: the one-question survey.

I’ve researched this for our free eBook, ‘Frankensurveys – don’t create a monster’, and the big headline grabber is that one-question surveys regularly achieve 50-90% response rates: an order of magnitude better than the old way of doing it. Stacked up against the myths above, it’s an approach that:

  • Is fit for the purpose of understanding and improving customer satisfaction.
  • Guards against bias and the threat of spreading dissatisfaction.
  • Produces actionable, real-time intelligence for improved customer satisfaction.
  • Can slot in as a seamless part of the customer journey, rather than a jarring interruption.

And rather than throwing away the long-form customer survey altogether, these same organisations are finding other, more specialised uses for it, alongside focus groups and other qualitative research methods.

So here’s one question for you: why not look into it?

Grab a free trial account now – no credit card needed, 5-second signup…