Scoring models assign lead records a number of points based on the interactions they have had with your organization. While these methods are long-standing, scoring leads is not a simple task for any CRM admin or marketer, especially when you want to do it at scale. With thousands of records in your database, no one in your organization is going to spend time sifting through every interaction each record has made.
In this data-obsessed world, one can’t help but question whether lead scoring has lagged behind. For one, lead scores can easily become inflated, even spiral completely out of control, with numbers running into the thousands. You can face fraught conversations with other teams who struggle to interpret the results. Then, there are decisions to make, such as whether you should implement score decay, and if so, how?
Often, monitoring lead scoring throws up multiple trade-offs; after all, one size does not fit all. Diagnosing an issue, identifying which subset of leads it applies to, and deciding the best course of action to remedy it is incredibly time-consuming.
If you’re questioning whether lead scoring works for you and your organization, there are considerations to weigh. This guide shares the characteristics of alternative lead scoring methods, in a Salesforce context. First, we will dive into the challenges of the more traditional methods.
Issues With Current Scoring Methods
As mentioned, there are (what we will refer to as) traditional scoring methods. These are rules-based, where the marketer/admin specifies the number of points assigned to a specific activity. Not intending to pick on Account Engagement (formerly Pardot), below is an example of a scoring model where the number of points awarded is directly related to the activity.
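As a minimal sketch of the idea (the activity types and point values here are hypothetical, not Account Engagement’s actual defaults), a rules-based model boils down to a lookup from activity type to points:

```python
# A minimal sketch of rules-based ("if this, then that") scoring.
# Activity types and point values are hypothetical examples.
ACTIVITY_POINTS = {
    "form_submission": 50,
    "webinar_attendance": 25,
    "email_click": 10,
    "page_view": 1,
}

def score_lead(activities):
    """Sum the points for every activity a lead has performed."""
    return sum(ACTIVITY_POINTS.get(activity, 0) for activity in activities)

# A moderately active lead quickly accumulates a large absolute score.
print(score_lead(["form_submission", "email_click", "page_view"] * 10))  # 610
```

Each activity simply adds its fixed value to the total, which is why absolute scores grow without bound as engagement accumulates.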
Recognizing the limitations of rules-based scoring models, SaaS vendors have been working on methods that ditch the “if this, then that” scoring and move towards taking a prospect’s whole series of touchpoints into account: “if this, and this, and this, then that”. Rather than using absolute numbers, a relative score makes one lead’s score contextual to other leads’ scores. For example, if the maximum score is 100, these models will use a scale of 0-100 and place a lead record in relation to the rest of the database.
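To make the relative approach concrete, here is one way such a 0-100 score could be derived; this is a sketch of the general idea, not any vendor’s actual algorithm. Each lead’s raw score is converted to its percentile rank within the database:

```python
from bisect import bisect_right

def relative_scores(raw_scores):
    """Convert raw scores to a 0-100 scale based on each lead's
    percentile rank within the whole database (illustrative only)."""
    ordered = sorted(raw_scores)
    n = len(ordered)
    return [round(100 * bisect_right(ordered, score) / n) for score in raw_scores]

# The raw numbers vary wildly, but the relative scores always sit on
# the same 0-100 scale, placing each lead in relation to the rest.
print(relative_scores([610, 45, 2300]))  # [67, 33, 100]
```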
The issue with these methods is that you’re relying on whatever algorithm is at work in the background; in other words, there’s little to no way to guide or influence the software’s calculations. There are numerous terms for this (which we won’t go into); essentially, you’re ceding trust without being able to inspect or validate what the algorithm is up to.
How to Structure a Scoring Model
We can set the highly automated approaches aside for now. The basis of any scoring model is to ask: “what is a meaningful touchpoint to my organization?”
- At a basic level, it’s categorized by activity type – for example, a form completion is more impactful than viewing a page on your website.
- Going further, it’s categorized by asset. Even if form completions are deemed more impactful, not all forms are created equal; some forms are going to be more valuable indicators of interest than others.
So, these are the ways leads increase their scores. As mentioned, this can lead to scores running into the thousands, with no easy way to sensibly interpret or adjust them.
Score Decay
When lead scores do spiral out of control, bringing them back into a workable state is not an exact science. Score decay is the practice of reducing a lead’s score with rules-based automation. The plan you could propose to others in your organization depends on the number of campaigns you send out weekly/monthly (opportunities to engage), and the number of touchpoints you expect a typical lead to go through before converting or becoming qualified.
Timing is clearly an important factor, and one that is often difficult for an admin to account for. This time factor is termed recency. If someone has engaged recently, that score addition ideally wouldn’t be wiped off the record; however, a hard-hitting activity that occurred two years ago likely doesn’t have the same relevance.
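One common pattern for combining decay and recency (a sketch under assumed parameters, not a prescribed formula) is to shrink each touchpoint’s contribution exponentially with its age, so a hard-hitting activity from two years ago carries far less weight than the same activity from last week:

```python
from datetime import datetime, timezone

# Assumption: a touchpoint loses half of its value every 90 days.
HALF_LIFE_DAYS = 90

def decayed_points(points, activity_date, today=None):
    """Reduce a touchpoint's points based on how long ago it happened."""
    today = today or datetime.now(timezone.utc)
    age_days = (today - activity_date).days
    return points * 0.5 ** (age_days / HALF_LIFE_DAYS)

today = datetime(2024, 6, 1, tzinfo=timezone.utc)
recent = decayed_points(50, datetime(2024, 5, 25, tzinfo=timezone.utc), today)
stale = decayed_points(50, datetime(2022, 6, 1, tzinfo=timezone.utc), today)
print(round(recent, 1), round(stale, 1))  # roughly 47.4 vs 0.2
```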
Again, you’re risking ‘painting everyone with the same brush’, whether that’s increasing scores in bulk or reducing them in bulk.
What Does ‘Advanced’ Mean?
So, what could we be doing better when scoring leads? Aside from addressing the challenges we’ve encountered thus far, an important underlying requirement is to bring marketing (including demand generation) and sales teams together, without confusion or finger-pointing.
Here are four characteristics that you should consider when looking to improve your lead scoring:
1. Include Activities From Your Salesforce Instance
Does it have to be the case that marketing owns the lead score? Sales should also have a say in how the score is calculated. When a lead is passed to the first sales point of contact (e.g. a business development rep), the outcomes of the calls they make and the meetings they hold would make the score more contextual.
Marketers often say that feedback from sales can be minimal. A reduced lead score based on a disqualified lead or an unanswered call could be valuable intel on who the marketing team shouldn’t be targeting, or where they shouldn’t be wasting their budget.
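As a hypothetical sketch of that feedback loop (not a specific product feature), sales-owned outcomes could feed adjustments, including negative ones, back into the same score:

```python
# Hypothetical adjustments a sales team could feed back into the score.
SALES_OUTCOME_POINTS = {
    "meeting_held": 30,
    "call_not_answered": -5,
    "lead_disqualified": -100,
}

def adjust_score(current_score, outcome):
    """Apply a sales outcome to the lead's score, never dropping below zero."""
    return max(0, current_score + SALES_OUTCOME_POINTS.get(outcome, 0))

# A disqualification zeroes the score, signalling marketing to stop targeting.
print(adjust_score(80, "lead_disqualified"))  # 0
```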
2. Be Platform-Agnostic
Some organizations use more than one marketing automation tool; for example, both Marketing Cloud and Account Engagement (formerly Pardot).
Then there’s the data that other integrated platforms could contribute, whether Salesforce-native/AppExchange products, such as CloudV, or external tools connected via integrations. This could include engagement data such as online account logins, survey responses, mobile app access, in-store visits, or eCommerce transactions.
3. Assign Weightings to Touchpoints
Treating scores per activity type as percentages (versus absolute numbers) means that you have more control over the relative value of one activity compared to others.
In a model where scores use absolute points, scores can inflate rapidly. One person on your team could perceive an activity type as more valuable than you do, causing skewed total scores that lose their meaning.
Activity types can instead each be assigned a relative score (as a percentage), with the individual percentages needing to add up to 100%. This aids communication between teams, and gives you a clear interface to leverage for those discussions.
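As a sketch of how such a weighted model could work (the weights and caps below are hypothetical, not CloudV’s configuration), each touchpoint type contributes at most its agreed share of a 0-100 score, and the weights are validated to sum to 100%:

```python
# Hypothetical weights per touchpoint type; they must add up to 100.
WEIGHTS = {
    "form_submission": 40,
    "webinar_attendance": 30,
    "email_click": 20,
    "page_view": 10,
}
assert sum(WEIGHTS.values()) == 100, "weights must add up to 100%"

# Cap how many occurrences of each type can count, so no single
# activity type can inflate the score beyond its agreed share.
CAPS = {"form_submission": 2, "webinar_attendance": 1, "email_click": 5, "page_view": 20}

def weighted_score(activity_counts):
    """Return a 0-100 score where each type contributes at most its weight."""
    score = 0.0
    for activity_type, weight in WEIGHTS.items():
        count = activity_counts.get(activity_type, 0)
        score += weight * min(count, CAPS[activity_type]) / CAPS[activity_type]
    return round(score)

print(weighted_score({"form_submission": 1, "email_click": 3, "page_view": 40}))  # 42
```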
4. Take Recency into Account
As mentioned, recency is an important factor to take into consideration, and one that is often difficult for an admin to account for.
You have a total score for a specific lead, but what’s the trend? Was this lead mildly interested in your brand for several months before spiking in activity? Or, conversely, has their activity declined recently?
Timestamped engagement activities could be mapped month by month to show a comparable increase or decrease – as opposed to a number that has changed over an indefinite amount of time (since the first activity date). This adds even more context to how relevant your brand is to that lead.
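A sketch of what that could look like with hypothetical data: group the timestamped activities into calendar months, so the trend is visible alongside the total.

```python
from collections import Counter
from datetime import date

# Hypothetical timestamped touchpoints for one lead.
activities = [
    date(2024, 2, 14), date(2024, 3, 3), date(2024, 3, 21),
    date(2024, 5, 2), date(2024, 5, 9), date(2024, 5, 30),
]

# Count touchpoints per calendar month; the month-over-month shape
# shows whether interest is building or tailing off.
per_month = Counter(d.strftime("%Y-%m") for d in activities)
for month in sorted(per_month):
    print(month, per_month[month])
# 2024-02 1
# 2024-03 2
# 2024-05 3  -> activity is trending upwards
```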
Summary
While lead scoring has been tried and tested for over a decade, one can’t help but question whether certain methods have lagged behind. Leads can be ‘slow burners’, taking time to qualify or convert; alternatively, a previously dormant lead could suddenly re-engage.
The key point is recency – and seeing how score changes trend over time. Analytics, by way of a chart, can bring a whole lot more useful context to sales teams as they work through their leads.
While this could be possible with a significant amount of configuration and maintenance, looking for a pre-made option is always easier.
CloudV offers a Lead Scoring solution for Salesforce based on any marketing or sales touchpoints, with recency taken into consideration, and configurable touchpoint type weighting.
The solution is compatible with Salesforce’s Sales, Service, Marketing and Commerce Clouds.
To learn more about CloudV, book a demo now!