CSAT surveys: get to the point

People instinctively understand that their time is precious.  That instinct has given rise to the idea of the ‘effortless experience’, where brands boost customer satisfaction by providing the smoothest, least time-consuming service possible.

But then they spoil it all by emailing the obligatory painful customer survey!

The inconvenient truth about customer satisfaction surveys is how much customer dissatisfaction they manage to spread.

Here are five reasons explaining why the shorter a CSAT survey is, the better…

Today’s elongated customer surveys are a throwback to another age and a misplaced methodology

Multiple choice market research surveys were a brilliant evolutionary step forward for how organisations understood their markets.  Back in the 1950s, that is…

While they still occupy an important place in the pursuit of market intelligence, they weren’t designed with the objective of determining customer satisfaction and finding ways to improve it.

More often than not, so-called customer satisfaction surveys morph from simple good intentions into a laundry list of questions that prioritise the collection of data over the spreading of happiness.

Many are even hijacked by colleagues and superiors for the pursuit of other corporate objectives, clouding their original purpose and irritating recipients. See this example survey sent after just a single email interaction!

[Image: example CSAT survey sent after a single email interaction]


The high-volume, low-cost benefits of email/online have turned the long-form customer survey into a false economy

Oh, but aren’t CSAT surveys just so gloriously cheap to execute?  Simply load your contacts onto one of the many free (or ‘as-good-as-free’) email survey platforms, input 20 questions, and send it out to millions!

This would be a utopia if people actually responded to them.  In fact, virtually no one does.  Execution costs are tiny, with no marginal cost increase per additional question asked, and yet the negative impact is incalculable.

Can the results do any good when you factor in the effect of bias on such low-response data?

Non-response, self-selection and ‘enter anything’ biases are rife in long-form customer surveys because they’re just too big

Since when did the prospect of a 1-2% response rate switch from an unmitigated disaster into a fist-bumping victory?  It shouldn’t take a Nobel Prize-winning statistician to point out that today’s prevailing poor response rates spell overwhelming bias.  Specifically, bias steering you off the scent of how non-respondents feel, and bias toward those who dutifully answer surveys come rain or shine.
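The scale of that distortion is easy to demonstrate.  Here is a minimal sketch, using entirely hypothetical numbers chosen to produce the 1-2% response rate described above, of how a long survey that dissatisfied customers are more motivated to answer can drag measured CSAT far below the truth:

```python
import random

random.seed(42)

# Hypothetical population: 10,000 customers, 70% genuinely satisfied.
population = [1] * 7000 + [0] * 3000

def responds(satisfied: int) -> bool:
    # Assumption: dissatisfied customers are several times more likely
    # to push through a long survey than satisfied ones.
    return random.random() < (0.01 if satisfied else 0.04)

responses = [s for s in population if responds(s)]

true_csat = sum(population) / len(population)
measured_csat = sum(responses) / len(responses)
response_rate = len(responses) / len(population)

print(f"response rate: {response_rate:.1%}")   # roughly 2%
print(f"true CSAT:     {true_csat:.0%}")       # 70%
print(f"measured CSAT: {measured_csat:.0%}")   # well below 70%
```

The survey isn’t measuring satisfaction; it’s measuring who bothers to respond.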

Adding insult to injury are the respondents who tick a random selection of answers, either to claim the incentive on offer or simply to ease the pain of finishing the survey.

There is no place for bias when your objective is to learn the truth.

People who do want to engage are deferring or abandoning survey participation because of how long it takes

Janelle Barlow’s celebrated work “A Complaint is a Gift” is essential reading in this space. Feedback is incredibly important, yet organisations make it incredibly hard for customers to express their opinions in a simple way.  Again we encounter the self-selecting bias of customers so determined to vent their disappointment or anger that they make themselves heard despite the obstacles put in their way.  What about others who aren’t as determined?

The other side of the coin is customers who believe in rewarding good service and reporting their positive experience.  But what if they’re too busy to invest precious time expressing these views, or the invitation to share them arrives days after the motivation has passed?  This is critically important data that CSAT surveys are supposed to draw out.

Timely “one question” CSAT surveys are delivering results hundreds or even thousands of times better than the old way

Surveys could and should be shorter, but how short can you go?

Ultimately, you can’t get any shorter than one question, and that’s the starting point for increasing numbers of organisations who don’t just want to achieve decent CSAT survey response rates, but customer satisfaction itself.

[Image: long versus short-format CSAT survey]

I’ve been interviewing just some of these businesses for an eBook, Frankensurveys – how not to create a monster, and a consistent picture is emerging about why they do it and what kind of results they achieve.


In a nutshell, one-question CSAT surveys:

  • Consistently achieve response rates of 50-90%, eliminating the risk of non-response bias. That’s up to 50-90 times better than using the old approach.
  • Empower customers to give real-time feedback, providing a new outlet for them to express simple one-click responses a la Facebook/TripAdvisor etc., but on a private basis.
  • Focus on answers that really matter; answers that translate directly into improvements you can make.
  • Can be commissioned and fully collated with virtually no lead-time or backend analytics, as there are no complexities of survey design and minimal points of failure.
  • Are easily ‘injected’ into the customer journey, making it more relevant and far less disruptive to the overall experience.
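The one-click mechanic behind these surveys is simple: each possible rating is a pre-built link, so answering takes a single click with no form to fill in.  Here is a minimal sketch of how such an email can be generated; the endpoint, parameter names and rating labels are all hypothetical:

```python
def build_csat_email(customer_id: str, ticket_id: str) -> str:
    """Build a one-question CSAT email body with one-click rating links."""
    base = "https://example.com/csat"  # hypothetical feedback endpoint
    ratings = [(1, "Poor"), (2, "Okay"), (3, "Great")]
    links = "\n".join(
        f'  <a href="{base}?c={customer_id}&t={ticket_id}&score={score}">{label}</a>'
        for score, label in ratings
    )
    return (
        "<p>How did we do today?</p>\n"
        f"{links}\n"
        "<p>One click is all it takes. Thank you!</p>"
    )

print(build_csat_email("cust-42", "tkt-1001"))
```

Because the rating is encoded in the link itself, the response can be collated the moment the customer clicks, with no survey design or backend analytics in between.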

The evidence from leading brands across many sectors is pointing toward one-question surveys.  Meanwhile, poor response rates are killing the traditional customer satisfaction survey.

My advice is to take a moment to scrutinise your current approach to customer satisfaction surveys and embrace the concept that less is more when it comes to length and complexity.

Do your customers think your customer satisfaction surveys are great?  Do you feel like showing yours off to colleagues and peers?

If not, get the Frankensurvey eBook here, no email address or details needed.
