Originally written by Esteve Graells on Forcegraells.
If we look at the topic of query optimization in Salesforce, we uncover one of the most common problems in any project: the performance degradation of everything that queries data in Salesforce.
That is to say, as our Org grows, the queries we initially designed (optimized or not) may degrade. And I’m not just talking about developers’ SOQL queries – I’m also talking about the List Views, Reports and Dashboards that users run. Worse, this degradation happens silently and goes unnoticed, becoming visible only through performance incidents in Production, when it is already too late! At that stage, corrective actions (index creation, data reorganization, etc.) will not be immediate.
To ensure you can obtain this information simply, I have created a tool named Query Performance Detector.
This tool lets you check how the Salesforce query optimizer evaluates List Views, Reports and Dashboards:
- The cost of each query
- The number of rows returned
- The access plan
- All based on the data actually present in the Org
It highlights those with a non-selective cost, so that you can optimize them as a priority, especially when they run over a very high volume of records. These are your most important optimization targets, because they are the ones users actually run.
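The cost and access-plan information comes from the Tooling API’s query plan (“explain”) feature. Below is a minimal sketch – not the tool’s actual code – of how such a plan can be requested from Apex. The class name, API version and session handling are illustrative assumptions; the `explain` parameter and the response fields (`relativeCost`, `cardinality`, `leadingOperationType`) are part of the documented Tooling API.

```apex
// Illustrative sketch: fetch a query plan from the Tooling API "explain" endpoint.
// Class name and API version are assumptions; a Named Credential would be
// preferable to a raw session id in real code.
public class QueryPlanSketch {
    public static String getQueryPlan(String soql) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint(URL.getOrgDomainUrl().toExternalForm()
            + '/services/data/v58.0/tooling/query/?explain='
            + EncodingUtil.urlEncode(soql, 'UTF-8'));
        req.setMethod('GET');
        req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
        HttpResponse res = new Http().send(req);
        // The JSON body contains a "plans" array: each entry reports the
        // relativeCost, cardinality, sobjectCardinality and the leading
        // operation type (Index, TableScan, ...) for one candidate plan.
        return res.getBody();
    }
}
```

A `relativeCost` above 1.0 indicates a non-selective query, which is exactly what the tool flags for priority optimization.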
With this information we can:
- Detect inefficient queries and make them efficient immediately, because if the volume is high, the cost will be too.
- Consult the performance history, and evaluate those that are getting worse to start the optimization process.
- Obtain the list of SOQL queries, so we know where to focus our optimization efforts.
This way, your optimization efforts will be immediately apparent to your users. In the remainder of this post, I will describe the tool’s characteristics and components.
The main technical characteristics of the Query Performance Detector are:
- 100% Apex and VisualForce (I still need to learn more about Lightning!)
- Creates 1 object to save the results history
- All the calculation processes are Batchable, so that their execution does not interfere with other users’ work
- Salesforce internal objects are queried and the Tooling API is invoked
Basically, what I have done has been to use components/functionalities/ data that already exist within the Salesforce platform, and put them to work to solve the problem, again demonstrating the great versatility and power that we have available.
I have used only 2 objects:
- QueryCostHistory__c: stores the results of all scans, whether of List Views, Reports, Dashboards, or user logs searched for SOQL queries
- Process_Log__c: lets me capture traces that I do not want shown, or that do not appear, in the Salesforce debug log
For tools like this, I think it’s important to create the least number of additional objects.
Classes and VisualForce Pages
The set of classes and pages that I have created:
- 4 VisualForce pages
- 11 Apex classes
The execution scheme is very simple:
- The main controller, QP_MainController, invokes the rest of the controllers to obtain the information. Each controller retrieves the list of objects it manages (List Views, Reports, etc.) within the period selected by the user, and stores it as pending analysis in the QueryCostHistory__c object
- The QP_ToolingAPIRequester controller is responsible for invoking the Tooling API to obtain the Query Plans, walking through the table in search of pending rows. It invokes the API in batches and records the results in the same object.
- The controllers that search for elements are named QP_GetLast*
- Subsequently, the Controllers and the VisualForce detail and historical pages are responsible for showing the individual and accumulated results respectively to the user.
- The controllers that display data are named QP_Show*
- Finally, there are 2 supporting objects used to hold data in lists
Characteristics to be highlighted
To obtain the List Views, Reports and Dashboards used in a given period, internal tables are queried; no new data is created and nothing unusual is done.
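For example, the standard `Report`, `ListView` and `Dashboard` objects expose last-usage timestamps that can be filtered with date literals. The queries below are illustrative of the approach, not necessarily the exact ones the tool runs:

```sql
-- Reports run in the last 30 days (LastRunDate is a standard Report field)
SELECT Id, Name, LastRunDate FROM Report WHERE LastRunDate = LAST_N_DAYS:30

-- List Views and Dashboards viewed recently (LastViewedDate is a standard field)
SELECT Id, Name, LastViewedDate FROM ListView WHERE LastViewedDate = LAST_N_DAYS:30
SELECT Id, Title, LastViewedDate FROM Dashboard WHERE LastViewedDate = LAST_N_DAYS:30
```

Because these are ordinary SOQL queries over standard objects, they work in any Org without extra configuration.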
All processes implement Database.Batchable: as I mentioned earlier, I’m obsessive about making sure users’ work is not affected by the tools I run, and I don’t want to strain governor limits – so I use the Database.Batchable interface and segmented execution with a scope size, which is a parameter of the Database.executeBatch method. Thus, no matter how large the volumes are, executions never exceed the limits, and the consumption of Flex Queue and asynchronous-queue resources is minimal (these always have lower priority than synchronous consumption, of course).
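As a sketch of this pattern (illustrative, not the tool’s actual code – the class, object status field and scope size of 50 are assumptions), a batch process looks like this:

```apex
// Skeleton of a Database.Batchable process with a small scope size.
// A batch that calls the Tooling API must also implement
// Database.AllowsCallouts to be permitted to make HTTP callouts.
public class PendingPlanBatch
        implements Database.Batchable<SObject>, Database.AllowsCallouts {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Select the rows still pending analysis (field name is illustrative)
        return Database.getQueryLocator(
            'SELECT Id FROM QueryCostHistory__c WHERE Status__c = \'Pending\'');
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Each invocation receives at most the scope size passed to
        // executeBatch, keeping per-transaction resource use low
    }

    public void finish(Database.BatchableContext bc) {}
}
```

Launching it with a small scope, e.g. `Database.executeBatch(new PendingPlanBatch(), 50);`, means each `execute()` transaction processes at most 50 records and comfortably stays within governor limits.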
Since I use asynchronous resources, I can launch the requests in parallel. If you schedule the process nightly (via the Schedulable interface), for example, you can launch the processes simultaneously, consuming roughly a quarter of the time you would need if you ran them one after another.
I have not implemented either the Schedulable interface or an alerting system, because I did not want to extend the code unnecessarily and wanted to keep it readable. You can easily find out how to do this, because there is plenty of documentation available; it’s enough to implement the Schedulable interface in the controllers. As for generating events and warnings, everyone has their own methods and preferences…
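If you do want to schedule it yourself, a minimal sketch looks like the following (the scheduler class name and the cron expression are illustrative; QP_ToolingAPIRequester is the tool’s controller mentioned above, assumed here to be batchable):

```apex
// Illustrative nightly scheduler for the analysis batch.
public class QP_NightlyScheduler implements Schedulable {
    public void execute(SchedulableContext sc) {
        // Scope size of 50 is an example value
        Database.executeBatch(new QP_ToolingAPIRequester(), 50);
    }
}

// Cron fields: seconds minutes hours day-of-month month day-of-week
// This expression fires at 2:00 AM every day:
// System.schedule('QP nightly scan', '0 0 2 * * ?', new QP_NightlyScheduler());
```

Scheduled jobs run in the asynchronous queues, so a nightly run keeps the analysis completely out of users’ way.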
With my knowledge of the platform, I have been able to extend it to create functionality that solves an obvious problem in any Org whose data is expanding or changing. The tool ensures there is no risk of hard-to-diagnose production incidents creeping up – and that could save your neck going forward!
The complete code of this tool is available in this Bitbucket repository, so you can use it as you want, and improve it.
I hope it helps!