How We Measure Customer Service Matters

If there are still any doubters out there regarding the generational shift from call centers to chat-based contact centers, take a look at this survey data from analytics firm Dimension Data. Preference for phone-based customer service drops from 90.4% in the oldest age profile to 12.3% in the youngest:

[Graphic: channel popularity by age profile]

As we state in big bold print on our website: your next generation of customers isn’t going to pick up the phone to call you. At the same time, 80% of businesses complain that the technology isn’t keeping pace with these changes in consumer demand. With all of the change coming to call centers, we’ve been thinking about how the way we measure customer service success has to change as well. As we move to chat and mobile messaging interactions, the qualities we measure, such as customer satisfaction and agent performance, do not change, but the way we measure them does.

Consequences of using the wrong metrics

As discussed in our previous blog post, the asynchronous nature of mobile messaging drastically devalues two traditional call center metrics: ticket resolution and handling time.

Using outdated agent performance metrics not only gives us an inaccurate picture of our service, but also risks promoting the wrong behaviors in our agents. For example, if agents know they are being judged by their handling time, we often see them push for an unnaturally early close to the conversation. We occasionally even have clients ask us to implement automated close messaging, such as: “Is there anything else we can help you with? If you don’t respond in five minutes, this conversation will automatically close.” Not only is this a poor customer experience, but there’s also just no benefit to pressuring the customer to end a chat. Chats only “close” from the agent’s perspective, not from the consumer’s.

While we still report handling time, we caution clients not to rely too heavily on this metric. Average handling time gives context to the customer experience, but it offers only imprecise visibility into how contact center resources are actually being used. Much of what gets counted as handling time may be the agent sitting idle, or jumping into another chat, while waiting for a response from the customer.
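As a rough illustration of that gap, here is a minimal sketch in Python comparing reported handling time with a crude estimate of active agent time. The event log, timestamps, and five-minute threshold are all assumptions invented for this example, not the format of any particular platform:

```python
from datetime import datetime, timedelta

# Hypothetical event log for one chat: (timestamp, sender) pairs.
# A real contact center platform's export will look different.
events = [
    (datetime(2017, 5, 1, 9, 0),  "agent"),     # greeting
    (datetime(2017, 5, 1, 9, 1),  "customer"),
    (datetime(2017, 5, 1, 9, 2),  "agent"),
    (datetime(2017, 5, 1, 9, 40), "customer"),  # customer replies 38 minutes later
    (datetime(2017, 5, 1, 9, 42), "agent"),     # resolution
]

# Handling time as traditionally reported: first message to last message.
handling_time = events[-1][0] - events[0][0]

# A rough proxy for active agent time: only count gaps that end with an
# agent message and are shorter than a threshold (here, five minutes);
# longer gaps are mostly the customer away from their phone, or the agent
# working another chat.
ACTIVE_GAP_LIMIT = timedelta(minutes=5)
active_time = timedelta()
for (prev_ts, _), (ts, sender) in zip(events, events[1:]):
    gap = ts - prev_ts
    if sender == "agent" and gap <= ACTIVE_GAP_LIMIT:
        active_time += gap

print(f"Handling time:     {handling_time}")  # 0:42:00
print(f"Active agent time: {active_time}")    # 0:03:00
```

In this made-up chat, 42 minutes of “handling time” contain only about three minutes of actual agent work.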

The real danger is targeting a goal handling time for each chat. Not all customer service events carry the same urgency. Handling time matters when a passenger has been bumped from a flight and needs to re-book, but it doesn’t really matter when changing a passenger’s meal option on a flight that is still two weeks out. The transition to messaging-based contact centers brings with it much greater flexibility to intelligently prioritize chats. This just isn’t an option for call centers: an agent can only ask a customer to wait on hold for a very short time before significantly affecting customer satisfaction. In contrast, messaging allows agents to keep multiple conversations open at once, constantly shuffling priority with a simple “One moment, please” or “I’m on it!” The customer can put their phone down and wait for the push notification telling them the task is complete.
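To illustrate the kind of shuffling we mean, here is a minimal sketch in Python of a chat queue that always surfaces the most urgent open conversation first. The topics and urgency ranking are hypothetical, made up for this example rather than taken from any particular product:

```python
import heapq

# Hypothetical urgency ranking by topic (lower number = more urgent).
URGENCY = {"rebooking": 0, "baggage": 1, "meal_change": 2}

class ChatQueue:
    """A toy priority queue of open chats waiting for an agent reply."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker keeps insertion order within a priority

    def add(self, chat_id, topic):
        heapq.heappush(self._heap, (URGENCY[topic], self._counter, chat_id))
        self._counter += 1

    def next_chat(self):
        """Return the most urgent open chat; the rest simply keep waiting."""
        _, _, chat_id = heapq.heappop(self._heap)
        return chat_id

queue = ChatQueue()
queue.add("chat-101", "meal_change")  # flight is still two weeks out
queue.add("chat-102", "rebooking")    # passenger was bumped and needs help now
print(queue.next_chat())  # chat-102 jumps the line
```

A phone queue has no equivalent move: the caller on hold is blocked until an agent frees up, while here the lower-priority chat just waits quietly for its next reply.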

Focus on customer experience

The asynchronous nature of messaging does challenge contact center managers to update their thinking. The logical approach is to consider to what degree metrics like handling time, closed tickets, and first call resolution are still relevant to the customer experience. Your team’s first response time, for one, is a much more powerful indicator of a good customer experience than handling time. You may also find that ticket resolution and first call resolution are difficult to track automatically for asynchronous messaging; these are probably better moved to customer satisfaction surveying.
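If you want to start tracking first response time, the calculation itself is simple. The sketch below assumes a hypothetical message log keyed by chat, with (timestamp, sender) pairs; your platform’s export format will differ:

```python
from datetime import datetime, timedelta

# Hypothetical message logs, keyed by chat ID.
chats = {
    "chat-201": [(datetime(2017, 5, 1, 10, 0), "customer"),
                 (datetime(2017, 5, 1, 10, 3), "agent")],
    "chat-202": [(datetime(2017, 5, 1, 11, 0), "customer"),
                 (datetime(2017, 5, 1, 11, 1), "agent")],
}

def first_response_time(messages):
    """Time from the first customer message to the first agent reply after it."""
    first_customer = next(ts for ts, sender in messages if sender == "customer")
    first_agent = next(ts for ts, sender in messages
                       if sender == "agent" and ts > first_customer)
    return first_agent - first_customer

times = [first_response_time(msgs) for msgs in chats.values()]
average = sum(times, timedelta()) / len(times)
print(f"Average first response time: {average}")  # 0:02:00
```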

In the end, the best way to judge your customer service success is still to let your customers tell you how you’re doing. These metrics, namely Customer Satisfaction Score (CSAT), Net Promoter Score (NPS)*, and Customer Effort Score (CES)**, don’t require updating because they quantify customer sentiment. The quality of the customer experience will continue to be the ultimate barometer for agent efficiency and team productivity.
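For reference, these scores are simple to compute once the survey responses are in. The sketch below uses the common conventions: NPS counts 9-10 ratings as promoters and 0-6 as detractors on a 0-10 scale, CSAT is often reported as the share of 4s and 5s on a 5-point scale, and CES is frequently just the average effort rating. The responses themselves are made up for illustration:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def csat(scores):
    """Customer Satisfaction Score: share of 4s and 5s on a 1-5 scale."""
    return 100 * sum(1 for s in scores if s >= 4) / len(scores)

def ces(scores):
    """Customer Effort Score: mean effort rating (scale conventions vary)."""
    return sum(scores) / len(scores)

nps_responses = [10, 9, 8, 7, 6, 10, 3, 9]   # "How likely are you to recommend us?" (0-10)
csat_responses = [5, 4, 4, 3, 5, 5, 2, 4]    # "How satisfied were you?" (1-5)
ces_responses = [2, 1, 3, 2, 1, 2, 4, 1]     # "How much effort did it take?" (1-5)

print(f"NPS:  {nps(nps_responses):.0f}")     # 4 promoters, 2 detractors -> 25
print(f"CSAT: {csat(csat_responses):.0f}%")  # 6 of 8 satisfied -> 75%
print(f"CES:  {ces(ces_responses):.1f}")     # average effort -> 2.0
```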

*For more on NPS, check out these posts by Forrester and Bain.
**For more on CES, check out this HBR article.
