Last summer, my apartment complex sent me a survey via email after a maintenance request had been taken care of.
“We'd like to hear your thoughts on your recent experience with us! Please let us know how we're doing by answering one quick question. Based on your recent service request experience, how likely are you to recommend [Apartment Name] to a friend or colleague?”
There were buttons labeled 0 (Not at all likely) through 10 (Extremely likely) for me to click to submit my response.
Using the Wrong Survey at the Wrong Time
Most of you probably recognize this as the Net Promoter® question used to derive a Net Promoter Score℠, often referred to simply as NPS®. Perhaps you use this question today in post-interaction surveys. There’s just one problem – I would never recommend an apartment complex to a friend or colleague based solely on a maintenance request. If I were to use one of the most popular apartment search engines on the internet, it would give me dozens of search criteria – bedroom count, bathroom count, amenities, pet policy, budget, square footage, rating, etc. – but it would NOT give me an option to filter by maintenance request NPS®, because almost NO ONE factors that into their decision!
This is just one example of how surveys like NPS® can be misused. Had they sent me a CSAT (Customer Satisfaction) survey and simply asked me to rate my maintenance request experience, that would have made sense. But NPS® is an entirely different kind of measure, one that should not be tied to a specific interaction because it doesn’t provide actionable feedback.
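For readers less familiar with how the score itself is derived, here is a minimal sketch: respondents who answer 9–10 are counted as promoters, 7–8 as passives, and 0–6 as detractors, and NPS® is the percentage of promoters minus the percentage of detractors. The responses below are made up purely for illustration.

```python
# Minimal sketch of how an NPS score is derived from 0-10 survey responses.
# The response list below is illustrative, not real survey data.

def nps(scores: list[int]) -> float:
    """Return the Net Promoter Score for a list of 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)    # ratings of 9-10
    detractors = sum(1 for s in scores if s <= 6)   # ratings of 0-6 (7-8 are passives)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 8, 6, 10, 3, 7, 9, 5, 10]  # hypothetical ratings
print(f"NPS: {nps(responses):.0f}")  # prints "NPS: 20" for this sample
```

Note that the single number tells you nothing about why a given respondent is a detractor – which is exactly the actionability problem described above.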
Tying Agent Compensation to Survey Results
But using the wrong survey at the wrong time isn’t the only mistake an organization can make with surveys. In some instances, coupling front-line employee compensation with survey results can lead to unintended consequences, like simply not helping customers. Yes, the “Voice of the Customer” is important to hear, but customers are human – they don’t always separate the empathetic, helpful agent from the lousy experience that caused them to call in the first place.
Ignoring Critical Factors of Customer Effort
Customer Effort Score (CES) surveys are gaining traction as well, but keep in mind that the effort a customer puts forth starts with figuring out how to contact you. From there, they might have to navigate a complex IVR and resort to shouting “Representative!!” into the phone, hoping just to reach a live person. And once they’ve navigated that, they might wait in queue for what seems like an eternity before they’re connected to an agent – who now has to make the interaction feel effortless to earn a good survey score. Agents can make a huge impact on customer perception, but there is a lot that’s out of their control.
Not Actioning Feedback
Another mistake companies often make with surveys is either not collecting actionable feedback or simply not acting on feedback that is actionable. I mentioned earlier that you shouldn’t tie an NPS® survey to a specific interaction because it doesn’t provide actionable feedback – the “why” behind a customer’s willingness to recommend. If customers are telling you they called multiple times and received different answers about a policy or process, and you’re not addressing the root cause of that issue, why bother asking? If you’re asking only the NPS® question and not following up with customers who aren’t promoters to find out what you could do better, how are you going to improve your results?
Customer feedback is critical to understanding how you’re performing and how you can improve. But the way you implement your feedback program determines whether it actually delivers that improvement.
Net Promoter®, NPS®, NPS Prism®, and the NPS-related emoticons are registered trademarks of Bain & Company, Inc., Satmetrix Systems, Inc., and Fred Reichheld. Net Promoter Score℠ and Net Promoter System℠ are service marks of Bain & Company, Inc., Satmetrix Systems, Inc., and Fred Reichheld.