Record-triggered flow has been around for a few years now and has fundamentally changed the way “declarative developers” (who use Salesforce Flow to develop without code, as opposed to “programmatic developers”) operate within Salesforce. Historically, if a flow needed to be triggered when a record was either created or edited, the autolaunched flow would need to be called through either Process Builder or Apex. If a flow needed to be called when a record was deleted, this was something that could only be handled through Apex, as Process Builder did not allow handling of deleted records.
Personally, I’ve been using record-triggered flow since it came out, and have advocated for Flow to be the only declarative automation tool used in new environments (where possible) for years. There are a number of things I have learnt along the way that I wish I’d known sooner, and I wish I could pass on this knowledge to my younger self. Instead, I’ll pass it on to the community – hopefully you can benefit from learning these lessons all at once!
1. Before-Save and After-Save Flow Behavior
If you have never worked with Apex triggers before, the significance of the choice between “Fast Field Updates” and “Actions and Related Records” may not be obvious. Essentially, this update allowed your Record-Triggered Flow to run either before or after the record is saved to the Salesforce database.
It’s worth noting that Before Flows are fired just before the Before Apex triggers, and the After Flows are fired after the After Apex triggers (you can read more about the order of execution here).
What this means is that you can change values on a record before the user’s current action has been committed, so updating the triggering record requires no additional DML statement at all. It’s also a performance improvement over the way Flow used to work – previously, if you needed to make a change to the primary record, it would first be committed to the database and then updated again, causing a slight delay when processing.
Understanding the power of splitting functions between running before or after the record is committed is something that a lot of newer Flow Developers may take for granted before they realize the extent of the impact. You should design your flows carefully and make use of this ability.
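For readers coming from the programmatic side, the same before/after distinction exists in Apex triggers, which can make the flow behavior easier to reason about. A minimal sketch (object and field choices are hypothetical, purely for illustration):

```apex
// Hypothetical sketch: the before- vs after-save split, in Apex terms.
trigger CaseDefaults on Case (before insert, after insert) {
    if (Trigger.isBefore) {
        // Before save: assigning a field is enough -- the value is saved
        // with the record, no extra DML. This mirrors a before-save
        // ("Fast Field Updates") flow.
        for (Case c : {
            if (c.Priority == null) {
                c.Priority = 'Medium';
            }
        }
    } else if (Trigger.isAfter) {
        // After save: the record Id now exists, so related records can be
        // created -- the equivalent of an after-save
        // ("Actions and Related Records") flow.
        List<Task> followUps = new List<Task>();
        for (Case c : {
            followUps.add(new Task(WhatId = c.Id, Subject = 'Review new case'));
        }
        insert followUps; // this costs an additional DML statement
    }
}
```

The takeaway is the same in both worlds: changes to the triggering record belong before the save; everything that needs the record’s Id, or touches other records, belongs after.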
2. When to Use Asynchronous Paths
Speaking of performance enhancements, the ability to have some actions executed asynchronously (meaning your transaction is not held up waiting for these actions to complete) can significantly improve the performance of your flows. This capability has been around since the Winter ‘22 release, and it was the first time asynchronous processing could be handled easily with a declarative tool.
If you notice that your flows are taking a significant amount of time to run and are causing delays for end users, you may be able to make use of an asynchronous path within your flows. If your flow is reaching out to an external system, for example, that work belongs on an asynchronous path – callouts cannot run in the same transaction as the triggering record change, and these sorts of functions can add significant amounts of time to transaction duration.
Flow Developers should be making use of asynchronous paths where it makes sense to do so – especially when executing a function in another system or when a particular flow contains a lot of functionality and users are noticing a delay in their workflows.
As an Admin/Declarative Flow Developer, you should work with any programmatic developers (code-writing developers) within your org to determine where you should be using asynchronous paths in your flows. There may be an object and an event (Opportunity Create, for example) that already runs a lot of complex Apex code. That is an event you don’t want to layer additional synchronous processing onto within a record-triggered flow, and an asynchronous path can help preserve performance.
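For context, an asynchronous path is the declarative counterpart of patterns programmatic developers already use, such as Queueable Apex: the slow work runs in its own transaction after the user’s save completes. A rough sketch (the class name and endpoint are hypothetical):

```apex
// Hypothetical sketch: offloading a slow callout the way an asynchronous
// flow path does declaratively. The callout runs in a separate transaction,
// so the user's save is not held up waiting on the external system.
public class SyncOpportunityJob implements Queueable, Database.AllowsCallouts {
    private final Set<Id> oppIds;

    public SyncOpportunityJob(Set<Id> oppIds) {
        this.oppIds = oppIds;
    }

    public void execute(QueueableContext ctx) {
        HttpRequest req = new HttpRequest();
        // Named credential "External_System" is an assumption for this example
        req.setEndpoint('callout:External_System/opportunities');
        req.setMethod('POST');
        req.setBody(JSON.serialize(new List<Id>(oppIds)));
        new Http().send(req);
    }
}
// Enqueued from a trigger context:
// System.enqueueJob(new SyncOpportunityJob(Trigger.newMap.keySet()));
```

An asynchronous path in a record-triggered flow gives you this same “run it later, don’t block the save” behavior without writing any of the code above.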
3. Record-Triggered vs. Scheduled-Triggered for Some Tasks
There may be some functions that your org needs to run against a record that are not required immediately and can wait until after hours. Sometimes an asynchronous approach to record-triggered flows will be the best answer, and other times a schedule-triggered flow that handles multiple actions together at the same time (usually outside hours) is the better approach.
Consider your callouts to other systems, for example. Although it’s a good idea to keep some of them running asynchronously, it may be worth assessing which of these are time-sensitive, and which can be run in bulk during off-peak times. For those that are not required to run immediately, you can consider spreading your functionality out using a combination of record-triggered and schedule-triggered flows.
Schedule-triggered flows run automatically as the specified Automated Process User (you can set this user in Automation Settings in the Setup Menu). You will need to ensure that the Automated Process User has the correct level of access to all fields and functions specified in the flow. You can read more about schedule-triggered flow considerations on the official Salesforce Help doc here.
4. Bulkification Nuances
This is one that I learnt about more recently and wish I had been aware of earlier – the way you’d think Salesforce Flow would bulkify transactions may not be the way it actually handles them.
I recently read a post by Melody Lwo (Salesforce Flowsome) in which she ran an experiment on Flow Bulkification following a discussion she had with Alex Fram-Schwartz (FlowFanatic). Melody discussed the flow limit that Salesforce had mentioned in their General Flow Limits document: the maximum number of executed elements at runtime.
Melody created and ran a test that explains how Flow handles multiple records running through a schedule-triggered flow, and how this can be used to optimize Flow design. These same learnings can be applied to make your record-triggered flows more efficient, and should also be front of mind when splitting some functionality off into a schedule-triggered flow.
Make sure to give Melody’s post a read – it’s jam-packed with valuable information!
5. Don’t Forget About Subflows!
Finally, the ability to call another flow from a parent record-triggered flow was announced in the Winter ‘22 release.
This allowed Flow Developers to break down their automations into smaller building blocks, and have these autolaunched Subflows called from multiple places. Doing this meant that complex Flow automation reused in multiple places would be more easily maintained, as it would only need to be updated once to apply everywhere. It also allowed Flow Developers to keep a cleaner Flow Canvas while building.
Subflows should contain functionality that can be called from multiple parent flows and run dynamically by using input variables. By doing this you can ensure the functionality is reused in more places, and overall the Flow landscape in your org requires less upkeep. You may even want to consider passing a “Mode” variable into the Subflow, to allow for multiple similar pieces of functionality to be kept together in a single Subflow. Then, the path that is followed in the Subflow is determined by the value of the “Mode” variable.
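The “Mode” pattern is the same idea as a method that branches on a parameter – one reusable unit, with behavior selected by an input value, just as a Subflow can branch on a “Mode” input variable in a Decision element. A hypothetical Apex-style sketch of the shape:

```apex
// Hypothetical sketch of the "Mode" pattern: one reusable unit of logic,
// with the branch taken determined by an input value. A Subflow achieves
// the same thing with a "Mode" text input variable and a Decision element.
public class AccountSubflowLogic {
    public static void run(String mode, List<Account> accounts) {
        if (mode == 'SET_DEFAULTS') {
            for (Account a : accounts) {
                if (a.Rating == null) {
                    a.Rating = 'Warm';
                }
            }
        } else if (mode == 'FLAG_REVIEW') {
            for (Account a : accounts) {
                a.Description = 'Flagged for review';
            }
        }
    }
}
```

Each parent flow passes the mode it needs, so closely related behaviors live in one place instead of being duplicated across several near-identical Subflows.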
There you have it – from asynchronous paths to Subflows, there are five important lessons I’ve learnt during my journey with Record-Triggered Flow that I wish I could have learnt sooner.
Hopefully you’ve learnt something new and are able to take on these insights far earlier in the process than I did! Share your experiences in the comments below.