When it comes to measuring customer satisfaction, too many companies have settled into a comfortable rut of tuning their approaches to get the results they want. It’s like buying a treadmill with the mile counter sped up; if we could all perform at such accelerated levels, we’d all be Olympic-class sprinters and milers. Ironically, the more critical renewal business is to a company, the greater the emphasis on inflating customer satisfaction metrics, and the greater the tendency to design research programs that deliver the expected results. This doesn’t happen overnight; it is a gradual slide into the bad habit of designing research to get the answers you want.
Worst Practices in Measuring Customer Satisfaction
Break the bad habit of measuring customer satisfaction to get the results your company wants by going after the following myths. If I were in a gym, it would be time to climb on the old treadmill and grind out another mile, at my real (really slow) pace.
- Selecting only the most satisfied customers for surveys. This happens more often than anyone cares to admit. The local Toyota dealer ran its customer satisfaction survey only after our family bought our second car from them; another time, I got a survey right after they gave me a loaner car even though my service bill was below US$100. Bribery? Perhaps. But bad sampling, definitely. You get the point: ask all customers for their feedback. And for goodness’ sake, don’t limit customer satisfaction surveys to the customers who have already renewed with you; that’s the equivalent of stuffing the ballot box.
- Over-sampling to compensate for small sample sizes. This also skews customer satisfaction results by once again over-representing your most active and, therefore, most positive customers. Rethink the frequency of your measurements and the events that drive them. If certain events only bring out your most satisfied customers, say so in the analysis; don’t misrepresent the most glowing customers’ data as valid across your entire customer base. (The short sketch after this list shows just how much this kind of sampling inflates a score.)
- The “monkeys running the zoo” data collection approach. When a department’s bonus plan is based on customer satisfaction measures, that department should not be running the research program that produces the data; that is the equivalent of letting the monkeys run the zoo. The temptation to inflate results is simply too great. Either bring in an outside firm, or have a department with no stake in the results field the surveys and analyze them.
- A culture that hates tough questions because it treats customer affection as equity. Many manufacturing companies, and most definitely software vendors, carry the broad impression that their customers like them and that this affection is equity. These same companies hate the tough questions about why customer satisfaction isn’t translating into greater sales, questions like “If our customers love us so much, why aren’t they adding our products in other areas?” Cultures that kill such questions need to rethink their focus. Willy Loman’s lament in Arthur Miller’s play “Death of a Salesman” sums it up: “People buy from me because they like me…” That’s rubbish; people buy because they see value.
- Questions that Little Caesars would be ashamed to use. Credit the pizza chain Little Caesars with some innovative commercials, including the taste test in which the white-jacketed researcher asks respondents whether they would like “more cheese or less cheese.” In many customer satisfaction surveys, the questions are all about what’s right with the company, with perhaps a half-inch-tall box left over for “suggestions for improvement.” It recalls the joke notepads that say “write your complaint here” above a tiny red box; too many companies have turned that joke into standard practice, shielding their management, and themselves, from the truth about their customers.
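To make the sampling problem concrete, here is a minimal sketch, using entirely hypothetical numbers rather than data from any study mentioned here, of how surveying only your happiest customers inflates a satisfaction score compared to drawing a random sample from the full base:

```python
import random

random.seed(7)

# Hypothetical customer base: satisfaction scores of 1-5,
# skewed positive the way many real bases are.
population = random.choices([1, 2, 3, 4, 5],
                            weights=[10, 15, 25, 30, 20], k=10_000)

def mean(scores):
    return sum(scores) / len(scores)

# Worst practice: survey only customers who already score 4 or 5
# (renewals, loaner-car recipients, event attendees).
happy_only = [s for s in population if s >= 4]
biased_sample = random.sample(happy_only, 500)

# Better practice: draw the sample at random from the entire base.
random_sample = random.sample(population, 500)

print(f"True mean satisfaction:        {mean(population):.2f}")
print(f"Random-sample estimate:        {mean(random_sample):.2f}")
print(f"'Happiest-customers' estimate: {mean(biased_sample):.2f}")
```

On these made-up weights, the “happiest customers” estimate reads roughly a full point higher than the truth; the inflation is built into the sample before a single question is asked.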
Customers, Unplugged
You have to wonder where all that customer satisfaction data goes once collected. Earlier in my career, I worked for a small software company and took over the job of research manager. One of my first tasks was to analyze the warranty cards, which doubled as customer satisfaction surveys; engineering was asking for customer feedback on existing revisions and wanted to know what to improve.
Our CEO took me to the closet next to his office and introduced me to the warranty cards, all five moving boxes of them, dating back five years. I asked if they had been looked at, and whether they had been entered into our CRM system. He said he would read them as they came in: if the customers were happy, the cards went into the moving boxes; if they were mad, he would have customer support call them; and if they were steaming, he would call them himself and give them a break on maintenance costs. Our customers were unplugged.
I bring this up because the cardinal sin of customer satisfaction surveys is that they literally end up in closets, abandoned cubicles, and long-term storage, or are simply thrown out, while the internal teams that really need them (engineering, support, and, most of all, sales) are denied the data.
To net out the story: I created a database and hired an intern to key in the results, and we had all five boxes entered in six weeks. The reports showed that one in 20 satisfied customers bought an upgrade within three years (a 5 percent upsell rate), that the unsatisfied customers had returned their products for refunds, and that the steaming customers had either left or were still milking the CEO for maintenance deals.
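As a sketch of the kind of tally that database made possible (the records and field layout here are hypothetical, standing in for the keyed-in warranty cards), the upsell rate falls out of a simple aggregation once the cards are in a database:

```python
from collections import Counter

# Hypothetical keyed-in warranty cards: (satisfaction, outcome).
cards = [
    ("happy", "bought_upgrade"), ("happy", "no_purchase"),
    ("happy", "no_purchase"), ("mad", "refund"),
    ("steaming", "maintenance_deal"), ("happy", "no_purchase"),
    # ...thousands more rows keyed in from the five boxes
]

by_satisfaction = Counter(satisfaction for satisfaction, _ in cards)
happy = [outcome for satisfaction, outcome in cards if satisfaction == "happy"]
upsell_rate = happy.count("bought_upgrade") / len(happy)

print(by_satisfaction)
print(f"Upsell rate among happy customers: {upsell_rate:.0%}")
```

The point is less the code than the fact that a report like this took one intern six weeks, versus five years of cards rotting in a closet.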
Analyst, Heal Thyself
The irony of an analyst firm sponsoring a research survey to publicly show the satisfaction levels that analyst firms’ most critical customer base, technology buyers and users, has with their offerings generated quite a bit of debate in e-mail this week. The catalyst was the Azul Partners study, which is rife with questions about its methodology.
First, a disclaimer: I hold several analysts at Aberdeen in high esteem, including Jim Brown, John Fontanella and Paula Rosenblum, research professionals I have worked with on users’ IT challenges and strategies. They are all very solid analysts. Second, thanks, Tekrati, for your insightful analysis of the Azul report; take a minute and read it here.
Now let’s take up the report, The Real Deal: Vendors and End-User Perceptions of Industry Analyst Firms.
- A free trial of Aberdeen Access as an incentive. This is troubling. The fact that one in three respondents took it suggests they may have felt compelled to give Aberdeen glowing reviews to make sure they received it.
- Show us your respondents. I couldn’t find a listing of the companies contacted, and the description of respondents in the report doesn’t really indicate how many companies are included. Red flag: Aberdeen provided the respondents out of its own database. Were they cherry-picked or random? We don’t know.
- The questionnaire needs to be shown. As with the Little Caesars example above, how can anyone be sure the questions were unbiased? No one knows; solid research always provides the questionnaire as an appendix.
- “…no firm or individual had undertaken a primary research study to look at how vendors and end users perceive analyst firms in today’s market.” Azul Partners, please Google the following companies: Analyst Strategy Group; Lighthouse Analyst Relations; Outsell; Photizogroup. Each has a current study on exactly this topic, with results that vary from the Azul study’s.
An analyst firm sponsoring a survey that shows itself at the top of the heap with the most strategic segment of all, users of technology, is incredibly ironic. But it’s time for the analyst to heal itself: if Azul is really sincere, it should combine its research with that of the firms mentioned above and show the variations in end-user customer satisfaction.
Bottom line: It’s time to shoot the sacred cow of customer affection as equity and get serious about making customer satisfaction scores genuinely jump. Measuring like you mean it, instead of the way the powers that be want the numbers to read, sets up the future for growth instead of confusion.
To read a response from Azul Partners, click here.
Louis Columbus, a CRM Buyer columnist, is a former senior analyst with AMR Research. He recently completed the book Getting Results from Your Analyst Relations Strategies, which is available on Amazon.com.