May 6, 2024
Three ways to ensure quality assurance (QA) results align with customer satisfaction (CSAT) and issue resolution:
- Redesign your quality form to focus on key drivers
- Measure three quality metrics vs. one overall score
- Evaluate interactions from the customer’s perspective
A quality assurance program offers one of the best windows into the customer experience. Designed effectively, quality can predict CSAT, providing actionable data that leads to significant and sustained improvements.
Yet, quality scores and CSAT results often tell different stories. The misalignment occurs when organizations don’t fully understand, or don’t measure, the true drivers of satisfaction through the quality program.
Below are a few straightforward approaches to realigning quality assurance with customer satisfaction.
1. Redesign Your Quality Form to Focus on Key Drivers
An effective quality program starts with the form. To accurately assess the customer’s experience, the quality form should reflect what matters most to the customer. Often, quality forms are designed with good intentions and varied opinions. Over time, however, they evolve and accumulate attributes that are not critical to the customer.
How do you design a quality form to capture the real drivers of satisfaction?
Organizations sometimes fail to consider the voice of the customer while designing quality forms. Avoid this common pitfall by conducting a key driver survey to understand what customers care about most. This survey asks customers detailed questions about their experience.
From there, perform a multiple regression analysis to quantify the attributes that drive satisfaction. The analysis will provide specific customer-critical attributes to include in your quality form.
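As a concrete illustration of that step, here is a minimal sketch of a key-driver regression in Python. The attribute names, survey data, and scoring scale are all hypothetical; in this toy dataset, overall satisfaction tracks the issue-resolution rating exactly, so that attribute emerges as the dominant driver.

```python
# Hypothetical key-driver analysis: regress overall CSAT on attribute
# ratings from a survey. All names and numbers are illustrative.

def solve(A, b):
    """Solve the linear system A x = b with Gaussian elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_ols(rows, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y."""
    X = [[1.0] + row for row in rows]  # prepend an intercept column
    k = len(X[0])
    XtX = [[sum(x[i] * x[j] for x in X) for j in range(k)] for i in range(k)]
    Xty = [sum(x[i] * yi for x, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

# Each row: one survey response's attribute ratings (1-5 scale).
attributes = ["issue_resolved", "clear_communication", "wait_time"]
responses = [
    [5, 4, 3], [4, 5, 2], [2, 3, 4], [5, 5, 5],
    [1, 2, 3], [3, 3, 3], [4, 4, 2], [2, 2, 5],
]
csat = [5, 4, 2, 5, 1, 3, 4, 2]  # overall satisfaction per response

coeffs = fit_ols(responses, csat)
for name, b in zip(attributes, coeffs[1:]):
    print(f"{name}: {b:+.2f}")
# Larger coefficients indicate stronger drivers of satisfaction,
# i.e., the attributes to prioritize on the quality form.
```

In practice you would use a statistics package (e.g., statsmodels or scikit-learn) and a much larger sample, but the idea is the same: the attributes with the largest coefficients are the candidates for your quality form.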
A quality form that captures what matters most to your customers will help you align QA scores and CSAT results.
2. Measure Three Quality Metrics vs. One Overall Score
Using an overall score to measure quality often produces inflated results and doesn’t provide an accurate view of performance. A single score can also mislead leaders into inaction because the organization appears to be performing well.
Measuring quality from the customer, compliance, and business perspectives gives you a more accurate view of performance. Combining attributes as different as resolving the customer’s issue, logging calls correctly, and following privacy regulations into one score doesn’t make sense because it dilutes the result.
For precise performance insight, measure these three components of quality:
- Customer Critical Accuracy measures quality from the customer’s perspective. Issue resolution and clear communication are two examples of customer-critical attributes that significantly impact the customer experience.
- Compliance Critical Accuracy measures how often regulatory and legal requirements are met. Errors in this area could expose the company to liability. Disclosing sensitive information without proper identification is an example of a compliance error.
- Business Critical Accuracy measures how well the agent followed critical business processes. These are attributes that customers may not care about but that could result in unnecessary costs and possible revenue loss. Accurately logging calls or attempting to close a sale are two examples.
*Out of respect for client confidentiality, we’ll refer to the organization as Centera.
Our client, Centera, initially reported an 86% overall quality score, appearing to perform at a high level. However, the results told a different story when we segmented the overall score into the three metrics mentioned above. Here’s the breakdown:
- Customer Critical Accuracy: 60% of transactions had no customer-critical errors.
- Business Critical Accuracy: 70% of transactions had no business-critical errors.
- Compliance Critical Accuracy: 100% of transactions had no legal or compliance errors.
Blending these distinct aspects of quality into a single, simplistic number masked Centera’s actual performance. In this case, the Customer Critical Accuracy result of 60%, not 86%, was the more accurate reflection of the customer’s experience.
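To make the segmentation concrete, here is a minimal sketch in Python. The transaction-level error flags are made up to loosely mirror the Centera breakdown; the point is that per-category accuracy surfaces a problem that a blended single score hides.

```python
# Hypothetical transaction evaluations: each dict flags whether a
# transaction contained an error of each type. Data is illustrative.
transactions = (
    [{"customer_error": True,  "business_error": True,  "compliance_error": False}] * 3
    + [{"customer_error": True,  "business_error": False, "compliance_error": False}] * 1
    + [{"customer_error": False, "business_error": False, "compliance_error": False}] * 6
)

def accuracy(evals, error_key):
    """Share of transactions with no errors of the given type."""
    clean = sum(1 for t in evals if not t[error_key])
    return clean / len(evals)

for key in ("customer_error", "business_error", "compliance_error"):
    print(f"{key}: {accuracy(transactions, key):.0%}")

# A blended single score averages the three categories together,
# hiding the weak customer-critical result.
blended = sum(
    accuracy(transactions, k)
    for k in ("customer_error", "business_error", "compliance_error")
) / 3
print(f"blended score: {blended:.0%}")
```

Reported separately, the three metrics show 60%, 70%, and 100%; averaged together they show roughly 77%, and the customer-critical weakness disappears from view.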
3. Evaluate Interactions from the Customer’s Perspective
A primary reason for QA and CSAT misalignment is that evaluators often assess from the perspective of an organization or agent instead of the customer. For instance, assessors often consider the transaction a pass when an agent does everything right but can’t provide a solution due to policy. In contrast, the customer would likely think the transaction failed since the agent couldn’t resolve their issue. The customer will probably indicate their dissatisfaction if asked about their experience in a survey.
Scored correctly from the customer’s perspective, the transaction would fail. Most importantly, the quality process would capture the specific reason the issue was not resolved; in this case, the failure occurred because policy prevented resolution.
This scenario demonstrates how a disconnect between what an organization and its customers consider “passing” can lead to misalignment of quality and CSAT results.
These are just three methods of ensuring quality and CSAT correlation. Sampling approaches, calibration, and overall program design also play critical roles. We’ll discuss those aspects of QA in subsequent content.