The power of Salesforce lies in the ability to customize an org, and much of that customization can be done quickly by an admin or consultant using declarative (point-and-click) configuration. It boils down to one fact: adding and modifying metadata is easy, but getting it wrong has serious consequences. And deleting metadata is an even scarier thought.
The way to increase the speed at which changes can be made, while mitigating the risk, is a well-understood implementation lifecycle. Manage changes to org metadata rigorously, or lose control of the org.
So, when it comes to taking care of org metadata, how well are Salesforce customers actually doing? We have analyzed over 1 billion metadata items, and this guide shares 10 things the Elements Product Management team learned along the way. Hopefully these findings will get you thinking more seriously about impact analysis.
1. Salesforce Metadata Items: The Numbers
The number of metadata items in orgs is staggering.
Here are the highest numbers we have seen in a single org:
- 250,000 reports
- 54,146 email templates
- 2,000 custom objects
- 20,000 custom fields
- 13,000 dashboards
- 50 managed packages
- 18 million cases
- 1.4x the storage limit
There was an org with 10 million task records, which we thought was jaw-dropping, only to be trumped the week after when we heard about an org with 114 million task records!
We haven’t analyzed Org62 (Salesforce’s own instance) but we’ve heard that they can beat these numbers!
2. Technical Debt Comes With Age
Any org over 5 years old has staggering levels of technical debt. This means that changes take longer and run the risk of breaking the org. This kills the agility of the business, and is particularly relevant now as organizations are accelerating their digital transformation.
Increasingly the answer is to “throw all that investment away and start again”. The answer should be to continuously work at reducing technical debt – every day, every sprint, every release.
3. Custom Field Limits
The top five standard objects (Lead, Account, Contact, Opportunity, and Case) often hit the custom field limit (500 or 800 fields per object, depending on your Salesforce edition).
This is because teams fear that deleting a field will break the org. The common practice is to “hide” or “deactivate” a field – but never actually delete it.
We’ve seen orgs with extra custom objects – Opportunity1 and Opportunity2 – created purely to hold additional fields alongside the standard Opportunity object. Architecturally, this is a nightmare for security, reporting, and forecasting.
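As a quick health check, you can count the custom fields on each object yourself. Here is a minimal sketch using the simple_salesforce Python library (our choice for illustration; the credentials are placeholders, and FieldDefinition is the standard entity that exposes field metadata to SOQL):

```python
from simple_salesforce import Salesforce

# Placeholder credentials - substitute your own org's details.
sf = Salesforce(username="admin@example.com", password="secret",
                security_token="token")

def count_custom_fields(object_api_name):
    """Count custom fields (API names ending in '__c') on one object."""
    soql = (
        "SELECT QualifiedApiName FROM FieldDefinition "
        f"WHERE EntityDefinition.QualifiedApiName = '{object_api_name}'"
    )
    records = sf.query_all(soql)["records"]
    return sum(1 for r in records if r["QualifiedApiName"].endswith("__c"))

for obj in ("Lead", "Account", "Contact", "Opportunity", "Case"):
    print(obj, count_custom_fields(obj))
```

If any of those counts are anywhere near the limit for your edition, it is time to start asking which fields are actually used.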
4. Metadata API Quirks
The Metadata API can be really quirky. It pulls all the metadata from your org, but you need to balance how much you request per call: ask for too much and it times out, ask for too little and you burn through API calls.
There are times when you ask the Metadata API for metadata and it gives you some, then simply ignores you. Make the same call again, and it returns the data. Our sync has had a ton of code added to make sure we get all the data, catch the timeouts, and don’t blow up customers’ API limits.
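We can’t share our sync code, but the defensive pattern is simple: retry the call with a backoff when the API times out or comes back empty. A generic sketch of the idea (the `pull` callable is a stand-in for whatever Metadata API request you are making):

```python
import time

def call_with_retries(pull, attempts=4, base_delay=5):
    """Retry a metadata pull that sometimes times out or returns nothing.

    `pull` is any zero-argument callable that performs one Metadata API
    request and returns the items it fetched (or raises on timeout).
    """
    for attempt in range(1, attempts + 1):
        try:
            result = pull()
            if result:          # the API sometimes "ignores you": empty reply
                return result
        except TimeoutError:    # stand-in for your client's timeout exception
            pass
        time.sleep(base_delay * attempt)  # back off before asking again
    raise RuntimeError(f"metadata pull failed after {attempts} attempts")
```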
5. Dependency API Limitations
The Dependency API is amazing and does the heavy lifting for some metadata items: it tells you where a metadata item is used.
We combined the Dependency API with thousands of lines of custom code to build a more complete where-used and dependency picture – one covering every metadata item and going past the API’s 2,000-result limit. The picture this paints is shocking.
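The Dependency API is exposed as the MetadataComponentDependency object in the Tooling API (still in beta, and capped at 2,000 rows per query). A minimal where-used sketch, reusing the simple_salesforce connection from earlier:

```python
def where_used(sf, component_id):
    """Ask the Dependency API what references a given metadata component."""
    soql = (
        "SELECT MetadataComponentName, MetadataComponentType "
        "FROM MetadataComponentDependency "
        f"WHERE RefMetadataComponentId = '{component_id}'"
    )
    # Tooling API query endpoint; results are capped at 2,000 records,
    # which is exactly the limit we had to code around.
    result = sf.restful("tooling/query/", params={"q": soql})
    return [(r["MetadataComponentName"], r["MetadataComponentType"])
            for r in result["records"]]
```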
6. Processing Power for Org Impact Analysis
Don’t underestimate the processing power required to create an org impact analysis: pulling all the metadata, then understanding the interdependencies and relative importance of every item.
It also needs to be re-run every time anything in your org changes, because the analysis is only valuable if it is up to date.
This is something admins, and anyone else responsible for a Salesforce org, need to be aware of.
We sync and analyze customer orgs (both production and sandboxes) every night, which adds up to over 500,000 API calls per day. We run the sync and analysis overnight on AWS so we don’t trash each org’s API limits.
Some analysis is done on request – for example, which users have access to which fields via profiles and permission sets. One such request for a particular client returned 2 billion results for a single object.
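Field-level access is a good example of why these result sets explode: the raw data lives in the FieldPermissions object, one row per field per profile or permission set, before you even multiply by users. A sketch of the underlying query (same simple_salesforce connection as before):

```python
def field_access(sf, object_api_name):
    """List which profiles/permission sets can read or edit each field."""
    soql = (
        "SELECT Field, PermissionsRead, PermissionsEdit, "
        "Parent.Name, Parent.IsOwnedByProfile "
        "FROM FieldPermissions "
        f"WHERE SobjectType = '{object_api_name}'"
    )
    for row in sf.query_all(soql)["records"]:
        kind = "profile" if row["Parent"]["IsOwnedByProfile"] else "perm set"
        print(row["Field"], kind, row["Parent"]["Name"],
              "read" if row["PermissionsRead"] else "",
              "edit" if row["PermissionsEdit"] else "")
```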
7. Managed Packages Get ‘Sticky’
This is great news for ISVs – but not for customers. A lack of visibility into the usage and dependencies of managed package metadata means customers are very cautious about uninstalling managed packages. Packages install fields on standard objects and add them to page layouts, and a package upgrade can make new fields appear in the org and on page layouts unannounced.
In-depth org analysis is required to surface managed package dependencies so that packages can be removed with confidence. Be really ruthless about installing managed packages, and remove them after the trial ends if they are not going to be used. We hear stories of Trailhead managed packages installed into production!
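A first step toward being ruthless is simply listing what is installed. The Tooling API exposes installed packages through the InstalledSubscriberPackage object; a sketch, again assuming the simple_salesforce connection from earlier:

```python
def installed_packages(sf):
    """List managed packages installed in the org, with their namespaces."""
    soql = (
        "SELECT SubscriberPackage.Name, SubscriberPackage.NamespacePrefix "
        "FROM InstalledSubscriberPackage"
    )
    result = sf.restful("tooling/query/", params={"q": soql})
    return [(r["SubscriberPackage"]["Name"],
             r["SubscriberPackage"]["NamespacePrefix"])
            for r in result["records"]]
```

Walk that list with the business: anything nobody can claim is worth a dependency check and, quite possibly, an uninstall.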
8. Record Types Are Like Glitter
Record types are really powerful. They are like glitter. They are so exciting and shiny, but once you have them, they are almost impossible to get rid of – you’ll instantly regret having so many.
Why? Like a field, a record type can be referenced by many metadata types. If you remove one without checking everywhere it is used, you risk breaking the org. So you don’t – or can’t – delete them.
The Dependency API does not analyze record types, so it takes thousands of lines of code to search for them in every metadata item. You need to search not just by API name, but also by label and by ID, because any of these could be the reference used.
Every metadata type needs a different query, which means writing 42 different queries just to find record types in Flows!
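To give a flavour of what that search looks like, here is a deliberately simplified sketch that scans retrieved Flow metadata files for a record type by API name, label, or ID. Real coverage needs a variant of this for every place a reference can hide, which is where the 42 queries come from:

```python
import pathlib
import re

def flows_referencing_record_type(retrieve_dir, api_name, label, record_id):
    """Find Flow metadata files that mention a record type by any handle.

    A reference may be stored as the API name, the label, or the 15/18
    character ID, so we have to look for all three.
    """
    patterns = [re.escape(api_name), re.escape(label),
                re.escape(record_id[:15])]  # 15-char prefix matches both IDs
    hits = []
    for flow in pathlib.Path(retrieve_dir).glob("flows/*.flow-meta.xml"):
        text = flow.read_text(encoding="utf-8")
        if any(re.search(p, text) for p in patterns):
            hits.append(flow.name)
    return hits
```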
Without this analysis you are stuck with unused record types cluttering up the org – just like the glitter you find, weeks after the party happened, that the hoover hasn’t managed to catch.
9. The State of Documentation
Documentation of “why” you made a change is critical institutional knowledge that will make future impact assessment easier. It is ‘paying it forward’.
The levels of documentation, however, are very poor, even at the most basic level (e.g. completing description fields). Have you noticed that standard object metadata and Salesforce managed packages have very few description fields filled in? Why are description fields not mandatory?!
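You can measure your own documentation debt directly: FieldDefinition exposes each field’s description, so finding the blanks is one query per object. A sketch, using the same connection as the earlier examples:

```python
def undocumented_fields(sf, object_api_name):
    """Return custom fields on an object that have no description."""
    soql = (
        "SELECT QualifiedApiName, Description FROM FieldDefinition "
        f"WHERE EntityDefinition.QualifiedApiName = '{object_api_name}'"
    )
    return [r["QualifiedApiName"] for r in sf.query_all(soql)["records"]
            if r["QualifiedApiName"].endswith("__c") and not r["Description"]]
```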
With the power of Dynamic Forms in Winter ’21, it is even more important to document how your record pages are configured for future analysis.
10. Long Release Cycles = Poor Agility and Engagement
It’s a familiar story: the business demands more frequent changes, but releases are often more than a month away because every change, however small, goes through the same rigorous development cycle with little or no impact analysis.
Our own org is complex, with integrations to external systems, but we push 100 change requests through each month. Half of them are delivered within a day, straight into production, because we are confident the risk is low. How? We have the analysis and documentation to back it up.
Faster releases drive huge business agility and also engage end users who feel their requests are being addressed.
Final Thoughts
Hopefully, these findings got you thinking more seriously about impact analysis. The power of Salesforce lies in the ability to customize an org, but that same power can also be an org’s downfall. Stay on top of your metadata and save yourself the headaches!