How can you measure CRM data quality? What are the ‘go-to’ ways to prevent poor data quality, and fix the issues once they appear?
If you are like most people, you would have struggled to answer these open-ended questions.
‘Data quality’ cannot be captured in one single metric – not only would that be an impossible task, it would be a foolish one, too!
We’ve all heard the warnings. If mismanaged, poor data quality becomes an insidious force, pushing the ‘Three Inseparable Principles’ off-balance. The result? We disrupt both productivity and user adoption (the other two principles) before we have had time to isolate and manage the causes of bad data!
There are burning questions around Salesforce and data quality that we set out to answer. Using responses that we collected from around 300 organizations, plus real-life analysis of our own DemandTools user base, we’re keeping an eye on the changing trends and challenges.
Personally, I genuinely look forward to analyzing the survey data that we collect each year, hypothesizing possible factors, and then sharing the insights. Now it’s time for me to share my favorite insights from our ‘State of CRM Data Management 2020’ report.
This post has been adapted from the webinar “Acquiring and Retaining Customers – A Shared Responsibility for Growth”, hosted by Validity and featuring the SalesforceBen team. Watch the on-demand recording here.
The Key Takeaway?
- An alarming 44% of respondents estimated a loss in revenue as a result of poor quality CRM data.
- The loss in revenue ranges from 5%-20% of total revenue.
- 42% don’t know either way!
Placing a revenue amount on this insidious situation prompts us to ask: ‘are we taking data quality seriously?’
The Most Common CRM Data Quality Issues?
Coming to the realization that your data issues are costing you in terms of productivity, user adoption (and ultimately, revenue) may leave you feeling overwhelmed. Your challenges are, most likely, not unique!
Issues with CRM data quality are practically universal. We found that 95% of the survey respondents reported having at least some CRM data quality issues. What about the other 5%? Maybe they are in denial, but that’s not for us to say!
The most common data quality issue impacting CRM effectiveness is missing or incomplete data (69%). Half of the respondents admitted to struggling with duplicate data and incorrect data, followed by expired data (41%).
How about you? Are any of these issues familiar?
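These issue categories can be sketched as a quick audit over an exported record set. This is a minimal illustration, not how any particular tool works; the field names (`name`, `email`, `phone`, `last_updated`) and the one-year staleness threshold are assumptions.

```python
# Hypothetical sketch: count incomplete, duplicate, and expired records
# in a CRM export. Field names and thresholds are illustrative only.
from datetime import date

REQUIRED_FIELDS = ("name", "email", "phone")

def audit(records, today=date(2020, 6, 1), stale_days=365):
    """Tally records that are incomplete, duplicated, or expired."""
    issues = {"incomplete": 0, "duplicate": 0, "expired": 0}
    seen_emails = set()
    for rec in records:
        # Missing or incomplete data: any required field empty
        if any(not rec.get(f) for f in REQUIRED_FIELDS):
            issues["incomplete"] += 1
        # Duplicate data: same email seen before (case-insensitive)
        email = (rec.get("email") or "").strip().lower()
        if email:
            if email in seen_emails:
                issues["duplicate"] += 1
            seen_emails.add(email)
        # Expired data: not touched within the staleness window
        updated = rec.get("last_updated")
        if updated and (today - updated).days > stale_days:
            issues["expired"] += 1
    return issues
```

Even a rough tally like this turns a vague sense of “our data is messy” into numbers you can track month over month.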
Actions Towards CRM Data Quality?
We asked which steps organizations were taking to maintain data quality in 2020.
We grouped these into broad approaches covering both preventative measures, such as ‘cleaning data before importing’, and remedial measures that keep issues at bay, such as ‘manually identifying and correcting data quality issues’.
Looking at this chart, we learned that over 90% of the study participants are taking at least some steps to improve their CRM data quality. Yes, many organizations have their own custom processes, but the measures they share in common are encouraging.
We can’t ignore the top entry, however. It’s concerning that the most common approach to CRM data quality is a manual one!
While these methods can deliver great results now, how will they stand up further down the line? These days, the data volumes and velocity that organizations need to handle mean that manual approaches simply can’t scale.
Maybe you also know of some manual approaches going on in your CRM, not only ‘under the radar’ but established as the way to prevent and remedy poor data?
Most Used Data Quality Tools
We’ve set the scene on the challenges and the costs they impose on our organizations. We thought it would be interesting to carry out some real-life analysis.
Usage data from our own DemandTools products is perfect for analyzing actual data management practices and comparing them to what was self-reported in the survey. Treat this as a peek into what other data teams are getting up to!
Note: if you are not familiar with our DemandTools modules, I’ve provided an overview at the end of this post.
So, let’s ‘map’ the issues we’ve identified so far onto the first analysis: our most used modules.
Ranking our most used modules showed clear winners:
- MassEffect: the most heavily utilized module, providing built-in data standardization for importing, exporting, updating, upserting, and deleting/undeleting records.
- MassImpact: enables users to modify and update thousands of existing records.
- Single Table Dedupe: finds and merges duplicate records in standard and custom objects.
Data standardization, mass updating, duplicate merging – we can say that the challenges we outlined previously are mirrored in the modules our users are inclined towards.
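To make the duplicate-merging idea concrete, here is a rough sketch of the general technique: group records on a normalized match key (here just a lowercased email; real tools like Single Table Dedupe support far richer matching rules), then merge each group by keeping the first non-empty value per field. The field names and merge policy are illustrative assumptions, not DemandTools internals.

```python
# Illustrative dedupe sketch: group on a normalized key, then merge.
# Field names and the "first non-empty value wins" rule are assumptions.
from collections import defaultdict

def match_key(rec):
    """Normalize the matching field so trivial variations still match."""
    return (rec.get("email") or "").strip().lower()

def dedupe(records):
    groups = defaultdict(list)
    for rec in records:
        groups[match_key(rec)].append(rec)
    merged = []
    for dupes in groups.values():
        survivor = {}
        for rec in dupes:  # earlier records win, field by field
            for field, value in rec.items():
                if value and not survivor.get(field):
                    survivor[field] = value
        merged.append(survivor)
    return merged
```

The interesting design choice in any merge is the survivorship rule: which record’s value wins for each field. “First non-empty” is only one option; “most recently updated” is another common one.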
Data Quality 24/7 – Data Doesn’t Take the Weekend Off!
Next up, the usage of our modules by the day of the week.
As you can see, there’s a stark difference in activity levels between Monday to Friday, versus the weekends – chalk and cheese!
We found this striking because data doesn’t take the weekend off! In fact, for many businesses, leads will be coming in on Saturdays and Sundays. If there’s no weekend data cleansing for inbound leads, your sales team comes in nice and early on Monday morning and picks up these new leads before your Admin has had a chance to clean them – and let’s be honest, bad things will happen.
There’s a strong argument in support of more automation so that data quality becomes a true 24-7 process.
The Geography of Data Quality – Usage by Region
After pulling the DemandTools modules usage by region, we thought about why there may be differences in which modules are being used.
We concluded that it’s down to different regional needs for data standardization.
Take North America, for example, where organizations have to deal with multiple postcode formats. In Europe, you will find a broad range of phone number formats; with so many different countries and languages in play, you’re talking upwards of 50 different formats. The APAC region (Australia and New Zealand in particular) is comparatively more homogenous in terms of data formatting.
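A tiny sketch shows why multi-country phone standardization gets complicated fast: the same number can arrive with parentheses, spaces, trunk prefixes, or international prefixes. The country rules below are heavily simplified assumptions; production tooling (or a dedicated library such as `phonenumbers`) handles many more cases.

```python
# Simplified phone normalization toward E.164-style "+<country><number>".
# The country-code table and trunk-prefix rules are illustrative assumptions.
import re

DEFAULT_COUNTRY_CODE = {"US": "1", "UK": "44", "DE": "49"}

def normalize_phone(raw, country="US"):
    digits = re.sub(r"[^\d+]", "", raw)   # strip spaces, dashes, parentheses
    if digits.startswith("+"):
        return digits                      # already internationalized
    if digits.startswith("00"):            # common European international prefix
        return "+" + digits[2:]
    digits = digits.lstrip("0")            # drop the national trunk prefix
    return "+" + DEFAULT_COUNTRY_CODE[country] + digits
```

Every extra country adds its own trunk prefixes and length rules, which is exactly why standardization pressure grows with the number of territories you sell into.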
Why do the UK and European organizations use Lead Conversion more heavily? Coming back to what I said above, when leads are coming from multiple territories, Admins have to cope with multiple languages, address formats, character sets, and so on. Standardization becomes a more pressing requirement – and therefore, adds complexity to the lead conversion process.
Why do North American organizations rely on Re-Assign Ownership? As a bigger market, it’s typical that organizations are prospecting into a greater number of accounts. So, my first thought is that if a set of prospect or customer accounts need re-assigning, it’s likely to be a larger amount of data to update. Secondly, perhaps people changing jobs more frequently means organizations rely on more efficient re-assignment methods.
How Effective are Your Data Maintenance Methods?
Here’s another callout finding:
One third of respondents either have no CRM data management process, or they report it as being ineffective.
Let’s talk about the chart below. We’re looking at the relationship between ongoing data management practices and their effectiveness. No surprises here: as data management becomes an ongoing and effective process, there’s a positive shift in CRM data quality.
Who Takes Responsibility for Data Management?
So finally, we wanted to look at who’s involved in CRM data management.
We use ‘The Three Inseparable Principles’ to also explain that data management is a shared responsibility. User adoption is instrumental if you are looking to succeed with data quality, yet getting users around the organization to actually use Salesforce is a hurdle. Getting users involved in maintaining data quality can do wonders for fostering trust in CRM data.
So back to the study. We asked: ‘who holds responsibility for managing CRM data?’
Where CRM data management is a full-time responsibility – either of a single person, a dedicated department, or a cross-functional team – these organizations are 2x as likely to find themselves in the ‘good’ or ‘very good’ data quality category.
The research shows that the cross-functional team approach aligns with the highest levels of data quality. Data quality should be a shared responsibility – case closed!
On the flip side, organizations with no one taking responsibility are 5x more likely to fall into the ‘poor’ data quality category!
Who is Doing What – Are Admins Sharing the Data Management Burden?
We’ve just drummed it into you that a cross-functional team is most effective for data management. Now we can compare which data management approaches are shared among multiple people across the organization, and where the Admin is shouldering the bulk of the workload.
Looking back to our real-life DemandTools usage data, we plotted module usage on a scale to show which modules are very much ‘admin only’ (or used by a single user) compared to those used more widely by various teams.
I think the challenge for companies going forward will be to move the usage of these modules closer to the right end of the scale.
To end on a positive note, we have noticed two things over the past few years:
- A rising awareness of data quality outside of ‘traditional’ data teams,
- A broadening use of our tools across the organization.
At Validity, we repackaged our products (including DemandTools) around 18 months ago to better reflect the needs of the market. And it looks like we were spot on! We’re seeing much more collaboration on data management now between admins and their users than we did before, and these repackaged modules help organizations ‘cover all bases’ in the fight against poor data quality.
DemandTools Modules – Overview
Here’s an overview of the DemandTools modules, to paint the picture of how they are categorized: cleansing modules, maintenance modules, and discovery modules. This provides much-needed context to the points we raise, using our own usage analysis to back up the self-reported survey results.