Customer Effort Score

Find out whether Customer Effort Score (CES) outperforms Net Promoter and Customer Satisfaction scores - and why or why not. Is it better to satisfy rather than delight? Should 'making it easy' for your customers be your biggest priority?
Customer Effort Score (CES) is measured by asking a single question: “How much effort did you personally have to put forth to handle your request?”

27 February 2012

Satisfy, Don't Delight

By Dr. Frederick Van Bennekom
Summary. "Delight, don't just satisfy" has been the mantra in customer service circles for many years. The underlying assumption was that satisfied customers are not necessarily loyal. Now a research project by the Customer Contact Council of the Corporate Executive Board argues that exceeding expectations has minimal marginal benefit over just meeting expectations. In essence, the authors argue that satisfaction drives loyalty more than the mysterious delight factors do. This article examines the argument, and specifically looks at the shortcomings in how the authors establish the loyalty link.

The holy grail of long-term company profitability has been knowing what drives loyal behavior on the part of our customers. What gets them coming back again and again? What drives them away? How do we identify the disloyal ones to win them back? Various researchers, from Reichheld's Net Promoter research to Keiningham & Vavra in Improving Your Measurement of Customer Satisfaction, have argued that we have to distinguish the attributes that satisfy from those that delight. A satisfied customer may buy again, but a delighted customer is far more likely to be loyal. That's been the argument.

"Stop Trying to Delight Your Customers," in the July-August 2010 Harvard Business Review, argues that the past research is flawed and leads to wasted effort. Matthew Dixon, Karen Freeman, and Nicholas Toman of the Customer Contact Council (CCC) addressed three questions in their research:

How important is customer service to loyalty?
Which customer service activities increase loyalty, and which don't?
Can companies increase loyalty without raising their customer service operating costs?

Here I'll summarize their research as reported and then discuss some shortcomings.

The research project surveyed 75,000 B2B and B2C customers across the globe about their contact center interactions and included extensive interviews with customer service managers. The published article doesn't include the actual survey instrument or the details about the administration process, but we can infer that many of the questions measured attributes of the service experience and the attitudes created on the part of the respondents, along with a slew of demographic data that were used as control variables in the analysis.

The authors argue "that what customers really want (but rarely get) is just a satisfactory solution to their service issue," and they have a new measure for loyalty. To paraphrase Reichheld, "forget everything you've ever known about loyalty research" - or should you? The authors list two critical findings for customer service strategies:

First, delighting customers doesn't build loyalty; reducing [the customer's] effort - the work they must do to get their problem solved - does. Second, acting deliberately on this insight can help improve customer service, reduce customer service costs, and decrease customer churn.

Indeed, 89 of the 100 customer service heads we surveyed said that their main strategy is to exceed expectations. But despite these Herculean - and costly - efforts, 84% of customers told us that their expectations had not been exceeded during their most recent interaction.

To summarize their argument in different words, companies should focus on reducing dissatisfaction, not maximizing satisfaction. I cringe at that statement since it's what US-based airlines practice.

Although customer service can do little to increase loyalty, it can (and typically does) do a great deal to undermine it. Customers are four times more likely to leave a service interaction disloyal than loyal.

The loyalty pie consists largely of slices such as product quality and brand; the slice for service is quite small. But service accounts for most of the disloyalty pie. We buy from a company because it delivers quality products, great value, or a compelling brand. We leave one, more often than not, because it fails to deliver on customer service.

Reps should focus on reducing the effort customers must make. Doing so increases the likelihood that they will return to the company, increase the amount they spend there, and speak positively (and not negatively) about it - in other words, that they'll become more loyal…

The immediate mission is clear: Corporate leaders must focus their service organizations on mitigating disloyalty by reducing customer effort.

The authors' new contribution to customer metrics is their Customer Effort Score (CES), which is based on a new survey question: “How much effort did you personally have to put forth to handle your request?” (Frankly, the wording confuses who is “handling the request.” I would have written: “How much effort did you personally have to put forth to get your request addressed?”) The question is rated on a scale where 1 means “very low effort” and 5 means “very high effort.”

[Editor's Note: Some people who have read this article think I am being overly kind in my assessment of that question's wording. "What does that even mean?" was one comment. Since this question forms the entire basis of their proposal for a new attitudinal measure, its wording is critical. If it's ambiguous, that makes their whole argument dubious.]
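The article gives the question and the 1-to-5 scale, but not how individual answers are rolled up into a reported score. As a purely illustrative sketch -- assuming CES is the simple mean of the 1-to-5 effort ratings, with lower meaning less effort; the function name and the handling of out-of-range answers are my own -- the calculation would look something like this:

```python
# Illustrative sketch only: the HBR article does not say how responses are
# aggregated, so this assumes CES is the mean of the 1-to-5 effort ratings
# (1 = "very low effort", 5 = "very high effort"); lower is better.
from statistics import mean

def customer_effort_score(responses):
    """Average effort rating across respondents on the 1-to-5 CES scale."""
    valid = [r for r in responses if 1 <= r <= 5]  # drop out-of-range answers
    if not valid:
        raise ValueError("no valid CES responses")
    return mean(valid)

# Example: a small batch of post-call survey answers
print(customer_effort_score([1, 2, 2, 5, 3, 1, 4]))  # about 2.57; lower = less effort
```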

Their research found that CES had strong "predictive" power for both repurchase likelihood and the amount of future purchases, which were their measures of loyalty -- more on that later -- and that it was a better predictor of loyalty than either the overall satisfaction question (CSAT) or the Net Promoter question (NPS). They claim it's better than NPS because NPS captures a customer's view of the company as a whole -- which is one of my main problems with NPS -- while CES is more transaction oriented.
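The article reports only that CES tracked the loyalty questions more closely than CSAT or NPS did; it gives no coefficients. For readers who want to see what that kind of comparison looks like mechanically, here is a toy sketch on entirely synthetic data (the numbers are fabricated for illustration and say nothing about the actual study):

```python
# Toy comparison on synthetic data: correlate each metric question with a
# stated repurchase-intention score and see which tracks it most closely.
# Nothing here reflects the CCC's actual data or results.
import numpy as np

rng = np.random.default_rng(0)
n = 500

effort = rng.integers(1, 6, n)               # CES: 1 (low effort) .. 5 (high effort)
intent = 7 - effort + rng.normal(0, 1.5, n)  # synthetic 1-to-7-ish repurchase intent
csat = rng.integers(1, 6, n)                 # pure noise, for contrast
nps = rng.integers(0, 11, n)                 # pure noise, for contrast

for name, metric in [("CES", effort), ("CSAT", csat), ("NPS", nps)]:
    r = np.corrcoef(metric, intent)[0, 1]
    print(f"{name:>4} vs stated repurchase intent: r = {r:+.2f}")
```

Even in a real study, of course, a correlation with stated intentions captured in the same survey is not the same thing as predicting actual future behavior -- a point I return to below.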

Beyond using the CES question, the authors discuss five key recommendations:

1. Don't just resolve the current issue; head off the next one.
2. Arm reps to address the emotional side of customer interactions.
3. Minimize channel switching by increasing self-service channel “stickiness.”
4. Use feedback from disgruntled or struggling customers to reduce customer effort.
5. Empower the frontline to deliver a low-effort experience. Incentive systems that value speed over quality may pose the single greatest barrier to reducing customer effort.

All of this sounds enticing when supported by sound research, but where are the weak spots?

First, I am always skeptical about findings when I don't get a clear picture of the research methodology. This article was not the practitioner version of an academic research paper that had been submitted to an academic journal's peer review process, which would require clean methodology. The reader is left to draw many inferences about the methodology.

No links to the survey instrument are provided. We know the CES question is posed on a 1-to-5 scale, but it appears the "loyalty" questions are posed on a 1-to-7 scale, based on a chart provided. We don't know how the respondents were identified and solicited, or when they got the survey in relation to the completion of the transaction. While we learn that 84% of the respondents said their expectations were not exceeded, we don't know how that 84% breaks down between expectations met and expectations not met. A chart shown without hard data implies a weak correlation between CES and CSAT, which seems hard to believe. The methodology could be exemplary, but the article does not provide enough background to remove my skepticism. If I don't feel comfortable with the methodology, I take any findings and conclusions with a giant grain of salt -- in this case, several giant grains.

Second, the researchers appear to have defined implicitly -- not explicitly -- delivering "delight" as "exceeding expectations," but they didn't measure what customer expectations were. Previous researchers posit that some attributes are satisfiers while other attributes are delighters. They have said that exceeding expectations on satisfiers buys little, which jibes with the findings here, but the CCC authors do not appear to have attempted to identify which attributes are delighters versus satisfiers, a lesson from Kano analysis.

Further, previous researchers present an important distinction between delighters and satisfiers that the authors don't address. For the delight attributes to have an effect, the satisfier attributes have to be delivered. Consider a hotel stay. If the room is not clean -- a satisfier -- exemplary performance on delight attributes buys little to nothing. Rather than test this hypothesis, the researchers dismiss delivering delight attributes as wasteful.

Third, following on the above point, we usually talk about companies raising the bar -- what was unexpected becomes expected -- but contact centers in general have lowered the bar -- what was once expected has become the unexpected -- through the drive to offload work onto the customer. The authors state that you can create loyalty by satisfying -- not delighting -- the customer through the delivery of good, basic, reliable service. Personally, if I can talk with a live person quickly without having to navigate some annoying phone menu, and have a courteous interaction with an intelligent, knowledgeable person who resolves my issue quickly while instilling confidence, I wouldn't be satisfied. I'd be delighted, which is a sad statement on the state of service. Perhaps CES is actually a delight attribute? Again, the authors don't discuss what is a satisfier versus a delighter.

Fourth, I would like to know what research led them to the hypothesis that the customer-effort attribute of service delivery would correlate highly with measures of "loyalty." I'm just curious…

Fifth, the authors claim CES has strong "predictive" power for loyalty, performing better than the NPS or CSAT questions that apparently were in their questionnaire. Repurchasing and increased purchasing, along with word-of-mouth comments, were their measures of "loyalty." In their study, CES showed a correlation to intended future behavior on those loyalty measures, not to actual future behavior. Would people's intended future behavior, captured as it apparently was right after a poor service experience, be different from what they would report later?

One of the strengths of the NPS research is that those researchers performed longitudinal research: they compared the NPS scores for specific companies to future company profitability. (Note: other researchers have not been able to duplicate those findings.) Here the authors did not perform such research. To claim predictive power for CES is a semantic stretch not justified by this research.

Lastly, it is important to note that the authors investigated contact center customer service for both product-based and service-based companies. In many, if not most, of these situations, the contact center is providing remedial service. By its very definition, remedial service is likely to be only a satisfier. No one wants to call for remedial service; it compensates for a failure in the core product or service. We are not told the difference in CES's predictive power for remedial versus non-remedial contact center experiences, and the distinction is important. The authors imply their findings provide lessons beyond contact center services through their use of hotel and airline service examples when building their argument. Generalizing these findings to the point of saying that service organizations in general should ignore delight attributes is dubious and unwarranted by the research.

The authors' five recommendations make sense, and CES may have value as a new customer metric for remedial customer service, but as with NPS, I'm not sold.

Addendum, Monday April 4, 2010... However, the Wall Street Journal newspaper subscription service has been sold on this metric. I have had truly horrific delivery experiences with my Journal subscription for almost a year. Today, I called to report the third day in a row with no delivery (sic). I agreed to take their IVR post-call survey, and it exhibited many of the problems with IVR surveys. It asked questions only about my call center interaction, but I was calling about my delivery service. The agent was fine -- as the agents have been every other time I have called to complain. But my complaints appear to do no good; I have the same lousy delivery service from the same driver. If you looked at the scores I gave, they would indicate that everything was okay. I was asked the NPS question, to which I gave a mediocre score. (They didn't ask me to qualify my response as "based upon your experience with the agent today...")

But then they asked me how much effort I had to use today to get my issue addressed. I gave the lowest score on the scale. Will the analysts be able to tell that I am basing my score on the extended relationship, where I have had horrific service? I doubt it. My effort today was minimal; I made a phone call. But I have continually complained to no avail -- thus my low score. You might say this shows the importance of CES, and you'd be right -- if the question had been phrased correctly and if the survey had been properly positioned to look at the extended service interaction.

Earlier in this article I made the point that you must first deliver basic satisfiers before attempting to delight. Fixing this incompetent delivery service would be delivering a basic satisfier -- that is, delivering the service I have contracted with them to provide. Trying to "delight" in this circumstance would be wrong. But it's not the amount of work that I have had to do that has me ready to cancel my subscription; rather, it is the fact that the core issue has never been addressed despite repeated complaints. Plus, the survey has no open-ended question where I could have explained my scoring in the hope that some responsible manager would actually see my complaint.

This is not the only time I have seen the Customer Effort question put into a survey without deep thought about how it is positioned within an extended customer-company interaction.

It's a good thing I love the darn paper... And the iPad app is a fantastic option.

Dr. Frederick Van Bennekom is Founder of Great Brook Consulting and author of Customer Surveying, A Guidebook for Service Managers. www.greatbrook.com.

http://www.icmi.com/Resources/Articles/2012/February/Satisfy-Dont-Delight