Best Practices in Bank Customer Experience Measurement Design

 

 

Introduction

The question was simple enough… If you owned customer experience measurement for one of your bank clients, what would you do?

Through the years, I developed a point of view on how best to measure the customer experience and shared it with a number of clients; however, I never put it down in writing.

So here it is…

Best practices in bank customer experience measurement use multiple inputs in a coordinated fashion to give managers a 360-degree view of the customer experience. Just like tools in a toolbox, different research methodologies have different uses for specific needs. It is not a best practice to use a hammer to drive a screw, nor the butt end of a screwdriver to pound a nail. Each tool is designed for a specific purpose, but used in concert they can build a house. The same is true for research tools. Individually they are designed for specific purposes, but used in concert they can build a more complete picture of the customer experience.




Generally, Kinēsis believes in measuring the customer experience with three broad classifications of research methodologies, each providing a unique perspective:


  • Customer Feedback – Using customer surveys and other less “scientific” feedback tools (such as comment tools and social media monitoring), managers collect valuable input into customer expectations and impressions of the customer experience.
  • Observation Research – Through performance audits and monitoring tools such as mystery shopping and call monitoring, managers gather observations of employee sales and service behaviors.
  • Employee Feedback – Frontline employees are the single most underutilized asset in terms of understanding the customer experience. Frontline employees spend the majority of their time in the company-customer interface and as a result have a unique perspective on the customer experience. They have a good idea about what customers want, how the institution compares to competitors, and how policies, procedures and internal service influence the customer experience.

These research methodologies are employed in concert to build a 360-degree view of the customer experience.


The key to building a 360-degree view of the customer experience is to understand the bank-customer interface. At the center of the customer experience are the various channels which form the interface between the customer and institution. Together these channels define the brand more than any external messaging. Best in class customer experience research programs monitor this interface from multiple directions across all channels to form a comprehensive view of the customer experience.

Customers and front-line employees are the two stakeholders who interact most frequently with each other in the customer-institution interface. As a result, a best practice in understanding this interface is to monitor it directly from each direction.

Customer Side of Bank-Customer Interface

Tools to measure the experience from the customer side of the interface include:

Post-Transaction Surveys

Post-transaction surveys provide targeted, event-driven feedback from customers about specific service encounters soon after the interaction occurs. They provide valuable insight into both customer impressions of the customer experience, and if properly designed, insight into customer expectations. This creates a learning feedback loop, where customer expectations can be used to inform service standards measured through mystery shopping. Thus two different research tools can be used to inform each other.

Comments and Feedback

Beyond surveying customers who have recently conducted a service interaction, a best practice is to provide an avenue for customers who want to comment on the experience. Comment tools are not new (in the past they were the good old-fashioned comment card), but with modern Internet-based technology they can be used as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction. Additionally, comment tools can be used to inform the post-transaction surveys. If common themes develop in customer comments, they can be added to the post-transaction surveys for a more scientific measurement of the issue.

Social Monitoring

Increasingly, social media is “the media”; prospective customers assign far more weight to social media than to any external messaging. A social listening system that analyzes and responds to this indirect social feedback is increasingly essential. As with comment tools, social listening can be used to inform the post-transaction surveys.

Bank Side of Bank-Customer Interface

Directing our attention to the bank side of the interface, tools to measure the experience from this side include:

Mystery Shopping

In today’s increasingly connected world, one bad experience could be shared hundreds if not thousands of times over. As in-person delivery models shift to a universal associate model, with the branch serving as more of a sales center, monitoring and motivating selling skills is becoming increasingly essential. Mystery shopping is an excellent tool to align sales and service behaviors with the brand. Unlike the various customer feedback tools designed to inform managers about how customers feel about the bank, mystery shopping focuses on the behavioral side of the equation, answering the question: are our employees exhibiting appropriate sales and service behaviors?

Employee Surveys

Employee surveys often measure employee satisfaction and engagement. In terms of understanding the customer experience, however, a best practice is to move employee surveys beyond engagement and use them to understand what is going on at the customer-employee interface, leveraging employees as a valuable and inexpensive source of customer experience information. This information comes directly from one side of the customer-employee interface and provides not only intelligence about the customer experience; it also evaluates the level of support within the organization, solicits recommendations, and compares perceptions by position (frontline vs. management) to identify the perceptual gaps that typically exist within organizations.


Customer Surveys

The Customer Side of the Bank-Customer Interface


Many banks conduct periodic customer satisfaction research to assess the opinions and experiences of their customer base. While this information can be useful, it tends to be very broad in scope, offering little practical information to the front-line. A best practice is a more targeted, event-driven approach collecting feedback from customers about specific service encounters soon after the interaction occurs.

These surveys can be performed using a variety of data collection methodologies, including e-mail, phone, point-of-sale invitation, web intercept, in-person intercept and even US mail. Fielding surveys via e-mail, with its immediacy and relatively low cost, offers the most potential for return on investment. Historically, there have been legitimate concerns about the representativeness of samples selected via email. However, as banks collect email addresses for a growing share of their customers, there is less concern about sample selection bias.

The process for fielding such surveys is fairly simple. On a daily basis, a data file (in research parlance, “sample”) is generated containing the customers who have completed a service interaction across any channel. This data file should be deduplicated, cleaned against a do-not-contact list, and cleaned against customers who have been surveyed recently (typically within three months, depending on the channel). If survey invitations were then sent to every customer in the file, the bank would quickly exhaust the sample, potentially running out of eligible customers for future surveys. To avoid this, a target number of completed surveys should be set per business unit, and a random selection process employed to select just enough customers to reach this target without surveying every customer.
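To make the selection logic concrete, here is a minimal sketch of such a daily sampling pass in Python. The column names, the 90-day re-contact window, the per-unit target and the assumed response rate are all illustrative assumptions, not prescribed values.

```python
# Illustrative sketch of a daily post-transaction survey sampling pass.
# Column names (customer_id, business_unit, last_surveyed) and the numeric
# parameters below are assumptions for the example, not a fixed specification.
import pandas as pd

RECONTACT_DAYS = 90            # assumed "surveyed recently" window
TARGET_PER_UNIT = 30           # assumed completed-survey target per business unit
ASSUMED_RESPONSE_RATE = 0.15   # assumed response rate used to size the daily draw


def select_daily_sample(transactions: pd.DataFrame,
                        do_not_contact: set,
                        today: pd.Timestamp) -> pd.DataFrame:
    """Return the customers to invite today, one row per customer."""
    # Dedupe and drop anyone on the do-not-contact list.
    eligible = (transactions
                .drop_duplicates(subset="customer_id")
                .loc[lambda df: ~df["customer_id"].isin(do_not_contact)])

    # Exclude anyone surveyed within the re-contact window.
    cutoff = today - pd.Timedelta(days=RECONTACT_DAYS)
    eligible = eligible[eligible["last_surveyed"].isna() |
                        (eligible["last_surveyed"] < cutoff)]

    # Randomly draw just enough invitations per business unit to hit the
    # completed-survey target, rather than inviting every eligible customer.
    invites_needed = int(TARGET_PER_UNIT / ASSUMED_RESPONSE_RATE)
    return (eligible.groupby("business_unit", group_keys=False)
                    .apply(lambda g: g.sample(n=min(invites_needed, len(g)))))
```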

So what do banks use these surveys for? Generally, the purposes fall into a few broad categories:

Post-Transaction: Teller and Contact Center

Post-transaction surveys are event-driven, where a transaction or service interaction determines if the customer is selected for a survey, targeting specific customers shortly after a service interaction. As the name implies, the purpose of this type of survey is to measure satisfaction with a specific transaction.

New Account and On-Boarding

New account surveys measure satisfaction with the account opening process, as well as determine the reasons behind new customers' selection of the bank for a new deposit account or loan – providing valuable insight into new customer identification and acquisition.

Closed Account Surveys

Closed account surveys identify sources of run-off or churn to provide insight into improving customer retention.


Mystery Shopping

The Behavioral Side of the Bank-Customer Interface


“You can expect what you inspect.”

This management philosophy is as true today as it was 50 years ago when W. Edwards Deming used it. Conducted properly, mystery shopping is more than a pure measurement technique; it is an excellent tool to motivate appropriate sales and service behaviors across all bank delivery channels.

Unlike the various customer feedback tools designed to inform managers about how customers feel about the bank, mystery shopping focuses on the behavioral side of the equation, answering the question: are our employees exhibiting appropriate sales and service behaviors?

It is the employees who animate the brand, and it is imperative that employee sales and service behaviors be aligned with the brand promise. Actions speak louder than words. Brands spend millions of dollars on external messaging to define an emotional connection with the customer. However, when a customer perceives a disconnect between an employee representing the brand and that external messaging, they will almost certainly experience brand ambiguity. The result severely undermines these investments, not only for the customer in question, but for their entire social network. In today’s increasingly connected world, one bad experience could be shared hundreds if not thousands of times over. Mystery shopping is an excellent tool to align sales and service behaviors with the brand.

So…what behaviors, channels and employees should be shopped?

Sales channels and sales behaviors offer the most ROI relative to other types of shopping, so in terms of prioritizing mystery shopping resources, shops of sales channels and sales behaviors should be the first priority. With the increasing use of universal associates and the transformation of tellers into sellers, it is incumbent on managers to measure and motivate these higher-level sales skills in both branches and contact centers. After sales behaviors have been prioritized, if resources remain, service scenarios can be included in the mystery shopping mix.

Objective Behaviors

As for the specific measurements, the best practice for mystery shop design is to focus on empirically measurable employee behaviors captured with objective questions (was a specific behavior present or not? Yes or no). The best methodology for deciding which questions to ask is to start with your brand promise and determine which sales and service behaviors animate the brand. Once you have developed a list of expected behaviors, the next step is to map each behavior to a specific question. Avoid compound questions that ask about two different behaviors, unless you expect both behaviors to be present at the same time and are not worried about distinguishing whether one is present without the other.
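As an illustration of that mapping, the brand attributes, behaviors and question wording below are invented examples rather than any actual bank's standards; the point is simply that each expected behavior resolves to one objective, single-behavior question.

```python
# Hypothetical example of mapping brand-driven behaviors to objective
# yes/no shop questions; attributes and wording are invented for illustration.
SHOP_QUESTIONS = [
    {"brand_attribute": "welcoming",
     "expected_behavior": "Greet the customer promptly",
     "question": "Were you greeted within 30 seconds of entering? (Yes/No)"},
    {"brand_attribute": "advisory",
     "expected_behavior": "Ask a needs-discovery question",
     "question": "Did the banker ask about your financial goals? (Yes/No)"},
    {"brand_attribute": "advisory",
     "expected_behavior": "Recommend a product based on stated needs",
     "question": "Did the banker recommend a specific product? (Yes/No)"},
]
```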

Open-Ends

Open-ended questions, either in narrative form or as qualitative questions asking what shoppers liked or disliked about the experience, add valuable context for understanding the customer experience. Many clients consider these qualitative observations the heart of the shop.

Subjective Impressions

While the core of the mystery shop is objective measurement of specific behaviors, there is a place for subjective impressions. Rating scales are used to capture shopper impressions of various dimensions of the customer experience, as well as the overall experience itself. These subjective ratings provide valuable context for interpreting the customer experience, and specifically the efficacy of the objective behaviors measured. For example, purchase intent ratings allow managers to calculate a correlation between each objective behavior measured and purchase intent, identifying which behaviors are more important in driving purchase intent, and therefore which investments in training, incentives and rewards have the most potential for ROI.
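As a sketch of that analysis, assuming shop results are stored with one row per shop, 1/0 flags for each behavior, and a numeric purchase-intent rating (all column names here are illustrative assumptions), the correlation could be computed along these lines:

```python
# Illustrative sketch: rank shop behaviors by correlation with purchase intent.
# Assumes one row per shop, 1/0 columns for each behavior, and a numeric
# purchase-intent rating; all column names are assumptions for the example.
import pandas as pd


def behavior_correlations(shops: pd.DataFrame,
                          behavior_cols: list[str],
                          intent_col: str = "purchase_intent") -> pd.Series:
    """Correlation of each yes/no behavior with purchase intent, strongest first."""
    corrs = {b: shops[b].corr(shops[intent_col]) for b in behavior_cols}
    return pd.Series(corrs).sort_values(ascending=False)


# Behaviors with the highest correlation are candidates for training,
# incentive and reward investment, e.g.:
# behavior_correlations(shop_results, ["greeted", "asked_needs", "offered_product"])
```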

Finally, given that mystery shopping measures employee behaviors against bank service standards, it is a best practice to calibrate and align service standards with customer expectations by constantly feeding information uncovered in the customer surveys back into the service standards and the mystery shop questionnaire. Such an informed feedback loop between customer surveys and mystery shopping ensures the behaviors measured are aligned with customer expectations.


Employee Surveys

Leverage Unrecognized Experts in the Customer Experience


Frontline, customer-facing employees (tellers, platform staff, and contact center agents) are a vastly underutilized resource in terms of understanding the customer experience. They spend the majority of their time in the customer-bank interface and, as a result, tend to be unrecognized experts in the customer experience.

An excellent tool to both leverage this frontline experience and identify any perceptual gaps between management and the frontline is to survey all levels of the organization to gather impressions of the customer experience. This survey can be fielded very efficiently with an online survey.

Typically, we start by asking employees to put themselves in the customers’ shoes and estimate how customers would rate their satisfaction with the customer experience, including specific dimensions and attributes of the experience. A key call-to-action element of these surveys tends to be a question asking employees what they think customers most like or dislike about the service delivery.

Next we focus employees on their own experience, asking the extent to which they believe they have the tools, training, processes, policies, customer information, coaching, staffing levels, empowerment, and support from both their immediate supervisor and senior management to deliver on the company’s service promise. Call-to-action elements can be designed into this portion of the research by asking what, in their experience, leads to customer frustration or disappointment, and by soliciting suggestions for improvement. Perhaps most interesting, we ask what strategies the employee uses to make customers happy. This is an excellent source for identifying best practices and potential coaches.

Finally, comparing results across the organization identifies any perceptual gaps between the frontline and management. This can be a very illuminating activity.


Social Listening

Filling in the White Spaces


Increasingly, social media is “the media”; prospective customers assign far more weight to conversations on social media than to any external messaging. Social listening systems that analyze these conversations, while still somewhat immature, are increasingly becoming a valuable source of customer comments.

Social media analytics software collects data across multiple sources (Facebook, Twitter, Google+, etc.), using text analytics in an effort to reveal patterns, identify trends and detect potential business problems from what people are saying in these online forums.

While these analytical tools are still somewhat immature, sentiment analysis technology has become more capable in recent years. Among the common features of these tools is sentiment tracking of conversations: determining whether sentiment is positive or negative and tracking the ratio of sentiment over time. Additionally, these tools typically mine text for specific keywords. Beyond automated analytics, we’ve had success taking this unstructured social feedback and reducing it to quantifiable themes through a manual process of coding, where comments are read and grouped by theme. While manual, we’ve found that sampling social conversations and reviewing them by hand provides valuable context not available through purely automated analytics.
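For example, once conversations carry a sentiment label, the positive-to-negative ratio can be tracked over time with a few lines of code. The schema here (a date and a sentiment label per mention) is an assumption for illustration, not the output format of any particular listening tool.

```python
# Illustrative sketch: track the ratio of positive to negative mentions by month.
# Assumes each scored mention has a datetime "date" and a "sentiment" label
# of "positive", "negative" or "neutral"; the schema is an assumption.
import pandas as pd


def monthly_sentiment_ratio(mentions: pd.DataFrame) -> pd.Series:
    """Positive-to-negative mention ratio per calendar month."""
    mentions = mentions.assign(month=mentions["date"].dt.to_period("M"))
    counts = (mentions[mentions["sentiment"].isin(["positive", "negative"])]
              .groupby(["month", "sentiment"]).size().unstack(fill_value=0))
    pos = counts.get("positive", pd.Series(0, index=counts.index))
    neg = counts.get("negative", pd.Series(0, index=counts.index))
    return pos / neg.replace(0, float("nan"))   # NaN when a month has no negatives
```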

Like comment monitoring, social listening is not a standalone research tool. It is not a survey, nor is the data collected from a representative sample of customers; as such, it is not statistically valid. Social listening, however, fills in the white spaces between other research tools. Its value lies in correlating social data with other data sources.

Research without call-to-action elements may be interesting, but it is not very useful. As with all research tools, call-to-action elements should be built into a social listening program. Any time there is negative criticism, it presents an opportunity for process improvement. Among the ways managers can act on social listening are identifying trends, finding chronic customer complaints, and identifying and correcting the root causes of those complaints.

Additionally, managers should construct processes to identify and respond to social conversations where appropriate. Customers who have had a problem fixed are famous for becoming vocal advocates of a company, and customers who have had a positive experience can be thanked for their feedback, which encourages loyalty. Try to respond to each review, positive or negative: thank the customer for their feedback, address the issue constructively and professionally, offer a solution to correct it, and leave it at that.

Finally, the unsolicited nature of social conversations offers a unique opportunity to feed themes identified in these conversations back into customer survey design, allowing managers to determine whether the issues uncovered are broadly present across all customers.


Customer Comments

A New Look at Comment Cards


Customer comment tools provide financial institutions a valuable tool to identify and reply to customers who have had a negative service experience and may be at risk for attrition or spreading negative word of mouth.

Beyond randomly surveying customers who have recently conducted a service interaction at a branch or call center, banks should also provide an avenue for self-selected customer feedback: feedback from customers who have not been selected to participate in a survey but want to comment on the experience.

In the past, the vehicle for collecting this unsolicited feedback was the good old-fashioned comment card. Today, the Internet offers a much more efficient means of collecting this feedback. For the branch channel, invitations to provide feedback with a URL to an online comment form can be printed on transaction receipts. For call centers, customers can be directed to IVR systems that capture voice feedback. Website and mobile users can be offered online comment forms as well.

Unsolicited feedback tools are not surveys and should not be used as surveys. In fact, they make terrible customer satisfaction surveys. Many institutions try to turn them into surveys by asking customers to rate such things as service, convenience and product selection. But these comment channels do not give reliable information because they do not come from typical customers. The people who fill out the cards tend to fall into one of four groups: extremely happy customers, extremely unhappy customers, extremely bored customers, and customers with requests (for products, new branch locations, etc.).


Notice the operative word in the first three categories: extreme. If a customer is satisfied with the product or service, why bother to give feedback? Customers expect to be satisfied. Having your expectations met is not something to write about. In research parlance, the sample is self-selected, and the people who provide such feedback are not likely to be representative of the general population of customers. It therefore makes no sense to ask these people to provide ratings that are going to be tabulated and averaged. The results will be useless at best and completely misleading at worst.

A better approach is to design them as letters to the bank president. They look something like the following template:

Dear [President's name]:

Here is something I would like you to know . . .

[Lots of white space]

Sincerely yours,
[Space for name, address and phone number]

Additionally, a check box can be included asking customers whether they would like someone to contact them about their feedback.

This type of feedback tool will deliver valuable qualitative data about the experience that prompted the customer to provide the feedback.

It is essential that a system for analyzing and responding to the feedback be put in place. First, sort the comments according to whether the customer wants a reply. There are ways to streamline this process, but ignoring it makes matters worse, because customers (the angry ones, at least) will expect a reply. Responding to customer concerns, on the other hand, makes comment tools exceptionally valuable. First, they provide a method to identify and reply to customers who have had a negative service experience and may be at risk of attrition, of undermining the brand with negative word of mouth, or, even worse, of negative social media commentary. Second, they minimize negative word-of-mouth advertising that would undermine marketing efforts and increase positive word-of-mouth advertising (customers who have had a problem fixed are famous for becoming vocal advocates of a company). On the flip side, customers who have had a positive experience can be thanked for their feedback, which encourages customer loyalty.

The next step in acting on the qualitative feedback is to reduce it to quantifiable themes through the process of coding, where comments are grouped by theme. For instance, 18% of comments may refer to “slow service” and 14% to “lack of job knowledge”. The frequency of each theme can then be monitored by business unit and over time.
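A minimal sketch of that tabulation, assuming comments have already been coded with a theme label and tagged with a business unit (column names are illustrative assumptions), might look like this:

```python
# Illustrative sketch: turn coded comments into theme frequencies by business unit.
# Assumes each comment row already carries a manually assigned "theme" label
# and a "business_unit"; the column names are assumptions for the example.
import pandas as pd


def theme_frequencies(comments: pd.DataFrame) -> pd.DataFrame:
    """Percent of comments falling into each theme, by business unit."""
    counts = pd.crosstab(comments["business_unit"], comments["theme"])
    return counts.div(counts.sum(axis=1), axis=0).mul(100).round(1)


# A row of the result might show 18.0 under "slow service" and 14.0 under
# "lack of job knowledge" for a given branch; tracked over time, these
# frequencies show whether issues are improving.
```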

Comment tools are not new, but with modern technology they can be employed as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.

Finally, the unsolicited nature of customer comments offers a unique opportunity to feed themes identified in those comments back into customer survey design, allowing managers to determine whether the issues uncovered are broadly present across all customers.


Conclusion

Best practices in bank customer experience measurement use a variety of research tools in a coordinated fashion to give managers a view of the customer experience from all sides of the bank-customer interface. Customer surveys, mystery shopping, employee surveys, social monitoring, and comment tools, used in concert, give managers a complete view of the bank-customer interface and the customer experience.

 

 

 

 


Eric Larse is co-founder of Seattle-based Kinesis, which helps companies plan and execute their customer experience strategies. Mr. Larse can be reached at elarse@kinesis-cem.com.

 
