Customer Support Performance Metrics

We’ve just spent the last two days talking to dozens of customer support professionals at the Relate Live conference in London.

Many of the conversations we had covered customer support performance metrics. As the profession moves from simply “helpdesk” to full-on “customer experience”, the metrics by which support teams are measured are coming under the spotlight this year.

There were some recurring themes which jumped out at us this year – echoed by almost every Head of Customer Support we spoke to.

1) Blunt customer support performance metrics are driving the wrong behavior

Many support professionals want better information to provide to management.

This was not because they wanted to performance manage their teams better (although that did come up). Rather, it was because of senior management’s insatiable demand for metrics. Many told me these metrics miss the fine detail needed to understand what’s really going on.

One delegate told me that their board requested “report on report on report” in order to understand customer satisfaction. Typically the board asked for response times, number of responses to resolution, CSAT scores and more. However, he pointed out that those metrics were too blunt to really improve customer satisfaction.

In his case, shift patterns affect customer satisfaction scores because a European team is supporting US users. Response time varies hugely depending on where their customer lives. Why? Because at certain times, the support team are asleep and can’t possibly respond!
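To make the point concrete, here is a minimal sketch (with invented ticket numbers) of why a single averaged response time misleads: segmenting the same data by customer region exposes the overnight gap that the headline figure hides.

```python
from statistics import mean

# Hypothetical ticket data: (customer_region, response_time_hours).
# A Europe-based team answers its own daytime tickets quickly, but
# US tickets raised overnight wait until the team wakes up.
tickets = [
    ("EU", 0.5), ("EU", 1.0), ("EU", 0.8),
    ("US", 7.5), ("US", 9.0), ("US", 6.5),
]

# The "blunt" metric a board typically sees: one overall average.
overall = mean(t for _, t in tickets)

# Segmenting the same data by region tells the real story.
by_region = {}
for region, hours in tickets:
    by_region.setdefault(region, []).append(hours)

print(f"Overall mean response time: {overall:.1f}h")
for region, hours in sorted(by_region.items()):
    print(f"{region} mean response time: {mean(hours):.1f}h")
```

The overall average lands in the middle and describes nobody's actual experience; the per-region split is what a support head can actually act on.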

It was also clear that Customer Support Heads want to know more about why a certain CSAT rating has been given. Often it’s not the agent’s response the customer has a problem with, it’s the product itself.

Support feedback gets merged with product feedback

Many I spoke to understood that this product feedback is exceptionally important. They know the business needs to capture it and hear it. But they didn’t want it to affect the agent’s bonus or the company’s perception of the support team.


2) There’s a real desire to put the power to fix customer complaints directly into the hands of the people the complaints are about

Too many measurement systems make it hard for the agent to get hold of customer feedback that’s about them.

Ratings are taken and then amalgamated for management to read. Often the lessons are not shared openly on the ground. This openness is important, so that each agent can fix their own mistakes. And so that agents throughout the support team can learn more quickly.
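One way to picture this openness: instead of only rolling ratings up into a management report, route each rating and comment straight back to the agent it concerns. The data below is invented for illustration, but the grouping step is the whole idea.

```python
from collections import defaultdict

# Hypothetical rating events: (agent, score_out_of_5, comment).
ratings = [
    ("alice", 1, "Took three emails to get an answer"),
    ("bob", 5, "Sorted in minutes, thanks!"),
    ("alice", 4, "Helpful, though the product bug remains"),
]

# Group feedback by the agent it is about, rather than
# amalgamating it into a single team-level number.
feedback_by_agent = defaultdict(list)
for agent, score, comment in ratings:
    feedback_by_agent[agent].append((score, comment))

# Each agent sees their own ratings and comments, not just a team average.
for agent, items in sorted(feedback_by_agent.items()):
    for score, comment in items:
        print(f"{agent}: {score}/5 - {comment}")
```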

I was reminded of the Eleanor Roosevelt quote,

“Learn from the mistakes of others. You can’t live long enough to make them all yourself.”

Support heads want to put the power to fix customer complaints directly into the hands of the people that serve them.

CSAT reports alone can’t fix things. Only real, ground-level data that agents can access easily fixes things for customers.

Senior management are looking at the CSAT needle. Only agents can move it.


3) Reducing (and measuring) customer effort was a central theme

Many customer support heads we spoke to were interested in the amount of effort a customer had to put in to get a support enquiry logged, dealt with and closed.

In a support environment, when customers are potentially angry or concerned, the “would you recommend us?” Net Promoter question can be highly inappropriate, and even make a fraught situation worse.

A lot of the delegates had experienced this first hand. As a result, they were interested in using their CSAT process and their ticket feedback to find out whether a customer felt the process of getting their issue resolved was easy.

They were also interested in finding out if the customer thought the agent had tried hard, and done as good a job as they could.

Both of these measures take customer support performance metrics in an interesting direction, because they go some way towards mitigating the points made in sections 1 and 2 of this article. Certainly many customer support heads were planning on testing out questions like this over the coming months.

Key takeaway

In summary, the question you ask your customers about how you did really matters.

Judging agent performance? Trying to improve customer satisfaction? Remember that customer support performance metrics are highly nuanced.

Make sure you think clearly about how you measure agents and team performance, rather than simply adopting a measurement supplied within the ticketing system, or using Net Promoter Score because the account management team do.


We write great articles about customer support measurement, customer feedback and improvement on our blog. Read more here: