
Build Your Own CI/CD Pipeline in Salesforce (Using GitHub Actions)

By Pablo Gonzalez

As Salesforce has grown in complexity over the years, so has the process for deploying changes from sandbox to production. This added complexity has brought to light the limitations of Salesforce change sets.

In turn, this has led to the rise of many Salesforce DevOps vendors (Copado, Gearset, Flosum, Salto, BlueCanvas, etc.) that can help you to automate your deployments through what is known as a CI/CD pipeline.

Why Should You Learn CI/CD?

If you are new to the concept of CI/CD (continuous integration/continuous delivery), I highly recommend this article by Gearset as well as their excellent guide to version control for Salesforce. For the purpose of this article, let’s go with the following definition:

A system that automates the deployment of changes from sandbox to production. This system also allows developers to collaborate on said changes and see a full history of who deployed what and when.

Returning to the topic of DevOps providers, the tools they offer provide immense value to the community. However, I think it’s very important that Salesforce developers and architects fully understand what these tools are doing behind the scenes, or what a good CI/CD pipeline actually looks like.

In this tutorial, I will show you how to create a complete CI/CD system using GitHub Actions (one of the many free CI/CD tools out there). You will have the knowledge to either create your own home-grown CI/CD system, or learn exactly what to look for when shopping for a Salesforce DevOps tool.

That said, I want to acknowledge the many tutorials and YouTube videos out there that show how to create a Salesforce CI/CD pipeline from scratch. So what’s new in this one?

This tutorial focuses on the Org Development Model – let’s find out more.

Org Development vs. Package Development

Prior to the launch of Salesforce DX, most Salesforce orgs were following a typical approach to releasing new features. They would make the changes in a sandbox org and eventually deploy those changes to production (while also pushing the changes to some intermediate sandboxes for testing, UAT, etc.). This is known as the Org Development Model.

With Salesforce DX, Salesforce introduced a new way of deploying changes, known as the Package Development Model. In this model, you are supposed to break down your metadata into logical packages (one package for service cloud, another for finance, etc.) and develop and test those packages in scratch orgs.

I highly recommend following the links above for more information about these two development models.

One unfortunate side effect of the introduction of this new model is that the great majority of tutorials for Salesforce CI/CD focus on the Package Development Model.

In reality, the majority of Salesforce customers are still following the Org Development Model, so there is a need to educate the community on how to implement a CI/CD system for this model.

Prerequisites

This is a highly technical and hands-on tutorial. Ideally you will already have a general understanding of the following:

You may already be an expert on this topic and simply want to see what my take is without having to read the entire article. If that’s you, just head over to the GitHub repository, which only includes the interesting bits!

Your Sandbox Strategy Matters

A Sandbox strategy is a logical way to organize your Sandboxes for the benefit of your development process. The strategy needs to consider the following questions:

  • Will each team member have their own Sandbox?
  • How many Sandboxes will you use for testing (QA, UAT, load testing, etc.)?
  • What will the Sandbox type be for each Sandbox?

I recommend watching this session for more information about this topic.

Your sandbox strategy affects your CI/CD pipeline because you need to choose which operations to automate in the context of how many sandboxes you have. For example, should you automate deployments to the UAT Sandbox? Perhaps this org is used by many business users and you are not comfortable with having changes automatically deployed there. Do you create a git branch for each sandbox? (I will touch on this later.)

For the purposes of this tutorial, let’s assume you have a medium-sized development team and that each member has their own Sandbox. A typical Sandbox strategy would work as follows:

Metadata to Track

Version control is at the core of a CI/CD system. Your Salesforce metadata needs to be stored in a remote Git repository that your CI/CD system can read. This is what allows you to automate actions like deploying only the metadata that has changed, or being able to see the history of a particular metadata item (like an Apex class) over time.

Before you can track your Salesforce metadata in Git, it needs to be exported into files on your computer – I will show you how to do this.

This brings up an interesting question: Should I track all the metadata in my org? While it’s certainly possible to track all the metadata and make your Git repository into a replica of your org, this is not recommended. Why not?

Salesforce metadata is exported as XML files. Some files, like Profiles, can be huge and non-deterministic – this means that the XML representation of them is not always the same, even if nothing has changed.

Also, the larger the XML file is, the harder it is to read diffs in Git and to understand what has actually changed.

With this said, my recommendation is that you don’t track all your metadata in version control. This means not everything will be automated via the CI/CD pipeline. You might still need to make some changes manually in production or deploy through change sets – that’s ok.

As a bare minimum, I recommend tracking all code-based metadata types (LWCs, Apex classes, etc.).

Step 1: Downloading the Metadata in VSCode

As explained earlier, at the core of a CI/CD system is the ability to store the Salesforce metadata in a remote Git repository. The first step is downloading the metadata of your org to your computer. From there, you will upload it to a remote repository.

For this, you are going to use VS Code: run the "SFDX: Create Project with Manifest" command, and then authorize the production org.

Once the org is authorized, the package.xml file is what you can use to tell Salesforce which metadata you want to download to your computer.

While you can edit this file manually, I highly recommend using the Salesforce Package.xml Generator extension to populate the file with all the metadata types you want to track. Kudos to Vignaesh – this extension is awesome!

Once you have selected all the metadata types, your package.xml will be updated.
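As an illustration, a minimal package.xml that tracks only code-based metadata types might look like the sketch below (the API version and the list of types are examples; adjust them to your org):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <!-- Track all Apex classes -->
    <types>
        <members>*</members>
        <name>ApexClass</name>
    </types>
    <!-- Track all Apex triggers -->
    <types>
        <members>*</members>
        <name>ApexTrigger</name>
    </types>
    <!-- Track all Lightning Web Components -->
    <types>
        <members>*</members>
        <name>LightningComponentBundle</name>
    </types>
    <version>55.0</version>
</Package>
```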

Then, right-click anywhere in the file and select “Retrieve Source in Manifest from Org.”
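If you prefer the terminal over the context menu, the equivalent SFDX command (assuming the manifest lives at manifest/package.xml, the project's default location) is:

```shell
# Retrieve everything listed in the manifest from the default org
sfdx force:source:retrieve --manifest manifest/package.xml
```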

Now the default directory should have all the metadata that you decided to track.

Step 2: Tracking Metadata in Git and GitHub

Now that you have a local representation of your Salesforce metadata, you need to upload it to a remote GitHub repository. This is needed so that GitHub Actions (our CI/CD engine of choice for this tutorial) can read the metadata of your org.

The first step is to make the entire SFDX project folder a Git repository locally. To do that, you are going to take a snapshot (known as a commit) of the entire project using the following commands:
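A typical sequence, run from the root of the SFDX project folder (assuming Git is installed), looks like this:

```shell
git init                                         # make the project folder a local Git repository
git add .                                        # stage every file in the project
git commit -m "Initial commit of org metadata"   # take the snapshot
```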

Now the entire SFDX project is tracked in Git, but only locally. To track it on GitHub, you now need to create what is known as a remote repository. This will host the exact same contents as your SFDX project, but in the cloud, where GitHub Actions runs.

So head over to GitHub and create a new repository.

Once the remote repository has been created, you need to go back to your terminal in VS Code and push the local repository. You can do this with:
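Assuming your repository's URL is the one GitHub showed you after creation (the URL below is a placeholder), the commands are:

```shell
# Point the local repository at the remote one on GitHub
git remote add origin https://github.com/<your-username>/<your-repo>.git

# Push the local commits (use "main" instead if that is your default branch name)
git push -u origin master
```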

Now you have the metadata you want to track in your GitHub repository.

Step 3: Choose a Branching Strategy

Now it’s time to decide which Git branching strategy to use. I’m going to use the Gitflow Workflow – I highly recommend reading this article as it does a really good job of explaining how this works.

That said, I want to point out some challenges with this strategy.

The strategy assumes that when a feature branch is merged into the development branch, the developer is confident that the feature is complete. In the case of Salesforce, merging into development would trigger a CI job that would deploy that feature to the integration/UAT sandbox. Then, when all the testing has passed, we merge the development branch into master (via a release branch).

This assumes that everything in development can be merged into master. In other words, it assumes that the UAT sandbox and the development branch represent a deployable state to production. However, in the real world, things are not that simple.

Salesforce orgs are messy, and just because a feature passes QA in the developer’s sandbox, it doesn’t mean that it will pass UAT and that it is ready for production. Why not?

  1. Maybe the developer’s sandbox was missing some key data that was needed to really test the feature end to end.
  2. Maybe everything works correctly, but a key business user who was supposed to perform UAT is out of office unexpectedly.
  3. Perhaps QA and UAT passed, but the business decides to delay the go-live date of a specific feature.

Many companies adopt a variation of this strategy when there's a dedicated QA/Integration sandbox and a separate UAT sandbox, with each long-lived branch mapped to one of those environments.

This gives you a little more control to say that “UAT is a deployable state to production”, but the arguments I made above are still true, and you cannot always guarantee that you can deploy the entire UAT org (or its branch) to production.

Here’s another problem with this approach – the more sandboxes you introduce, the more likely they are to fall out of sync, which can cause huge problems in the development process. Platforms like Salto that have really granular org-comparison and reconciliation capabilities can help with this.

Because of these problems, I think the best approach is Copado's branching strategy. Rather than merging one entire branch into another, you create a promotion branch, where you choose which stories to actually promote to the next environment's branch.

The CI/CD configuration that we are going to implement in this tutorial will follow the ‘happy path’ described in the Gitflow branching strategy, where one branch can be merged with another entirely.

Step 4: Choose a CI/CD Server

So you have your metadata in GitHub, and you've decided to use Gitflow as your branching strategy. The next step is to choose a CI/CD server that will automate some of the steps for you. We haven't yet discussed what those steps are – hang in there!

GitHub Actions is GitHub’s native framework for automating CI/CD jobs. Rather than explaining in detail how it works (check the link provided), I’ll give an overview of how this and other CI/CD servers work (like GitLab CI/CD, Jenkins, etc.).

I actually like the description that Vernon Keenan uses – he calls these tools “command servers” because that’s really what they are.

When you run SFDX commands on your computer, they work because you have the SFDX CLI installed (obviously), and because the directory you are running them in is an SFDX project. If you wanted to automate some SFDX tasks (retrieving metadata, deploying, etc.), where would that automation take place? On your computer, or on someone else's? Does that computer need to be on at all times?

This is where CI/CD servers come in. These servers run in the context of your remote Git repository and load it into a virtual machine (VM). These VMs are hosted by the CI/CD provider: Jenkins, GitHub Actions, etc.

The CI/CD provider will then give you a way to install software in that VM, such as Node.js, sfdx-cli, etc. So the exact same commands and Git operations that you can run on your computer can now be run on-demand on this VM.

And because your Git repository is essentially an SFDX project (and we can authorize any SFDX project against any Salesforce org), these VMs can execute SFDX commands against your org, such as deploying the new metadata from the feature branch. Magic!

Step 5: Decide Which Steps to Automate (and When)

So far we have:

  • Decided which metadata to track.
  • Uploaded it to GitHub.
  • Decided which sandbox and branching strategies to use.
  • Understood how CI/CD servers work.

Now we need to decide exactly what to automate. What you decide to automate, and when, depends entirely on your business needs. For this tutorial, I decided to automate four actions, triggered by three different events. This assumes that you are using the Gitflow workflow I explained earlier.

Action 1: Deployment

I want to automate the deployment from one org to another. At what point this is done is not relevant right now; we just need to know that this is something we want to automate.

Also, I want to be able to specify whether this is a real deployment or a check-only deployment – in other words, a deployment that only validates that the metadata can be deployed, without actually committing the changes to the org.

Action 2: Run tests specified by the developer

I also want to automatically run the Apex tests that the developer specifies. How they specify these tests and when these tests should run is something we’ll think about shortly.

Action 3: Scan the code

I want to automatically scan the code the developer changed or created, and log any issues directly on GitHub.

Action 4: Identify only the metadata that has changed

Finally, I want to automatically identify what metadata was changed or created by the developer.

In terms of when we want these steps to be automated, this is what I’ve decided:

Event 1: When the developer opens a pull request

According to the Gitflow Workflow, the developer will open a pull request to merge their feature branch against the development branch. A pull request gives the development team an opportunity to review the changes before merging them into the next branch.

So when a pull request is opened, I want to:

  • Do a check-only deployment against the Integration/QA sandbox (Action 1) while only deploying the metadata that has changed (Action 4).
  • Run only the tests specified by the developer (Action 2).
  • Scan the code for any vulnerabilities (Action 3).

Event 2: When the pull request is approved and merged

Once the development team has approved the pull request, they will merge it into the development branch. At this point, I want to:

  • Actually deploy the metadata that has changed (Actions 1 and 4) to the Integration/QA sandbox.
  • Run the tests again (Action 2).

Event 3: When the development branch is merged into master

Deploy the entire master branch to the Production org and run all the tests.

Demo

The end-to-end process is much easier to understand if you see it in action:

Step 6: GitHub Actions Configuration

Everything that we’ve discussed so far has led up to this moment. Let’s see the GitHub Actions in action (pun intended!).

GitHub Actions workflows are configured in YAML files under the .github/workflows directory. Let's have a look at the workflow that will run when a pull request is opened against the development branch:
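Here is an abridged sketch of that workflow. The step names and exact commands below are illustrative (the full, commented file lives in the GitHub repository), but the overall shape – a pull_request trigger, a VM, CLI installation, authentication, and a check-only delta deployment – is the core of the pipeline:

```yaml
# .github/workflows/pr-develop-branch.yml (abridged, illustrative sketch)
name: Validate PR on develop branch
on:
  pull_request:
    branches: [ develop ]
jobs:
  validate-deployment-on-integration-org:
    runs-on: ubuntu-latest
    steps:
      # Full history is required so sfdx-git-delta can diff commits
      - name: Checkout source code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Install Salesforce CLI
        run: npm install -g sfdx-cli
      - name: Install sfdx-git-delta plugin
        run: echo y | sfdx plugins:install sfdx-git-delta
      # The auth URL is stored as a GitHub secret, never in the repository
      - name: Authenticate to Integration Org
        run: |
          echo "${{ secrets.SFDX_INTEGRATION_URL }}" > ./SFDX_INTEGRATION_URL.txt
          sfdx auth:sfdxurl:store -f ./SFDX_INTEGRATION_URL.txt -s -a integration
      - name: Create delta package for changed metadata
        run: |
          mkdir changed-sources
          sfdx sgd:source:delta --to "HEAD" --from "HEAD^" --output changed-sources/ --generate-delta --source force-app/
      # "MyExampleTest" is a placeholder; the real workflow parses the
      # test names out of the pull request body
      - name: Check-only deploy of the delta changes
        run: sfdx force:source:deploy -p "changed-sources/force-app" --checkonly --testlevel RunSpecifiedTests --runtests "MyExampleTest"
```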

I’ve added detailed comments to the file that provide you with all the information you need to understand what is going on. Rather than explaining everything here again, I’m going to explain a few of the more interesting bits, but I highly recommend you read the file in its entirety.

How to: Deploy Delta Changes

I’m using sfdx-git-delta to deploy only the metadata that has been changed (or created) by the developer.

Some people in the ecosystem favor deploying the entire branch (i.e. the entire SFDX project to the target org). This is because the idea that the entire branch is deployable (as we discussed earlier) is attractive, and it makes Salesforce look like a traditional software app that needs to be recompiled every time a change is made.

Personally, I don’t agree with this. Your Salesforce org will never need to be deployed to another empty org or re-compiled in its entirety. I also mentioned in a previous section that it’s not so simple to have your UAT org (or its branch) in a deployable state to production in the real world.

In the end it’s up to you. If you want to deploy the entire branch, simply use the deploy command against the force-app directory.
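To make the difference concrete, here is what the two approaches look like as CLI commands (both are sketches; flags such as the test level are up to you):

```shell
# Delta deployment: generate a package containing only the metadata that
# changed between two commits, then deploy just that directory
mkdir changed-sources
sfdx sgd:source:delta --to "HEAD" --from "HEAD^" --output changed-sources/ --generate-delta --source force-app/
sfdx force:source:deploy -p "changed-sources/force-app"

# Full-branch deployment: deploy the entire project directory
sfdx force:source:deploy -p force-app
```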

How to: Specify Which Tests to Run

I allow the developer to specify which tests to run by using a special syntax in the pull request body:
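As an illustration (the exact delimiters are documented in the linked repository; the pattern below is an assumption), the pull request body could include a marker like this, which the workflow then extracts with a regular expression and passes to the CLI:

```
Apex::[AccountTriggerHandlerTest,OpportunityServiceTest]::Apex
```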

The link above provides detailed information on how I parse this text and pass it to the SFDX CLI to specify which tests to run.

This will make your deployment run much faster when compared to running all the tests in the org.

How to: Scan Your Code for Vulnerabilities

Finally, I'm using the SFDX scanner to scan the code in the delta directory. I decided not to fail the entire job just because there are warnings. Instead, the warnings are logged directly in the PR for your team to review. In my opinion, this is better than failing the job because it allows your team to review the code and have a conversation about it. If the same warning keeps showing up, it might be worth configuring the job to fail.
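A sketch of the scanning step, assuming the sfdx-scanner plugin is installed (the flags shown are real scanner options, but the file paths and output handling are illustrative):

```shell
# Scan only the Apex classes that changed in this PR and write the
# findings to a file, which a later step posts back on the pull request
sfdx scanner:run --target "changed-sources/**/*.cls" --format csv --outfile apexScanResults.csv
```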

Summary

Now you have all the knowledge required to configure a CI/CD pipeline with GitHub Actions. If you’d like to create a similar pipeline with a different provider, check out these sample repositories.

The Author

Pablo Gonzalez

Pablo Gonzalez is the creator of HappySoup.io and a Business Engineering Architect at Salto, the Business Engineering platform for Salesforce engineers.

Comments:

    Bryan ANderson
    April 18, 2022 7:46 pm
    The only comment I would make is instead of downloading and building sfdx from source (https://github.com/salto-io/salesforce-ci-cd-org-dev/blob/master/.github/workflows/pr-develop-branch.yml#L94), you should be leveraging the official SFDX Docker image that SF created in your GitHub Action (https://github.com/banderson5144/funcondemo/blob/master/.github/workflows/sfdxvalidate.yml#L18) https://hub.docker.com/r/salesforce/salesforcedx
    Frank Peleato
    April 18, 2022 10:22 pm
    Great article, thank you for sharing! We have a similar setup in our company but you gave me some good ideas on how to simplify some parts of it. Question: Does SFDX Git Delta pick up related files? For example, if you modify a .cls class, is the related meta file also copied to the changed-sources directory for validation/deployment?
    Pablo
    April 20, 2022 4:08 pm
    Hi Frank, I think it does! You can reach out to the author on the git repo, he's pretty responsive! https://github.com/scolladon/sfdx-git-delta
    Michael Soriano
    July 08, 2022 8:46 pm
    Hi Pablo - great work on this workflow. Learning a lot from it. I have a couple of questions: 1) For the code scanning part (sfdx scanner) - I'm getting an error "Advanced Security must be enabled for this repository to use code scanning" - which looks like a setting in "Advanced Security" in the repo - but looks like its a "paid" feature? I commented out that section for now - just to be able to continue. 2) I noticed in the 2 deploy workflows (push-develop-branch and push-master-branch) you're doing a full branch deploy. In my package.xml - I only have a couple of Apex classes, and an LWC component - and I'm seeing errors on other Apex classes that's not included in the branch. Are the repositories designed to have ALL metadata from the org? Is there any way to just do files that changed - just like the pr-develop-branch (sgd delta)? Thank you.
    Laura
    August 13, 2022 6:22 pm
    Hi Pablo! great work on this workflow. ¿Is there any way in the master deploy workflow ( push-master-branch) to run only the tests specified by the developer? Similar to the pull-request workflow? I did it using a text File that we must fill. Thanks for sharing your knowledge.
    Pablo
    August 15, 2022 12:23 pm
    I haven't tested it out but I think it should be possible to still get the original PR body and run those tests. Sorry, haven't had a chance to explore this further.
    Laura
    August 16, 2022 2:47 pm
    thanks for your answer :)
    Irfan S
    September 29, 2022 4:16 pm
    Hi Pablo, I am using your solution in my project and when I create a pull request with a specified test class name in the comment. Then my run apex test fails with 'Fatal error', 'Code Coverage Failure. Your code coverage is 0%. You need at least 75% coverage to complete this deployment.' Screenshot - https://imgur.com/a/82qCQIx The same test will pass when Runalltest is done and if I run the test manually, again it passes with more than 75% code coverage. I tried adding the class and test class in my PR but still it fails. I am out of ideas why this is failing with 0% coverage. Any suggestions?
    Salvador Hernandez
    October 13, 2022 1:12 am
    Please, This was working fine until today when I started getting the following error. Can someone please point me in the right direction on where to look? Create delta packages for new, modified or deleted metadata Run mkdir changed-sources mkdir changed-sources sfdx sgd:source:delta --to "HEAD" --from "HEAD^" --output changed-sources/ --generate-delta --source force-app/ shell: /usr/bin/bash -e {0} TypeError: Invalid Version: 3.31.04 Error: Process completed with exit code 1.
    Laura Malavé
    October 13, 2022 1:23 am
    Hello Pablo, I hope everything is well. I have an error ( TypeError: Invalid Version: 3.31.04) that started to occur today in the workflow, specifically in the 'Create delta packages for new, modified or deleted metadata' step, do you have any idea what happened? Thanks in advance
    Laura Malavé
    October 13, 2022 10:44 pm
    Hi! to fix it, I did this for each workflow: steps: # Install nodejs - uses: actions/setup-node@v3 with: node-version: '14' # Checkout the source code - name: 'Checkout source code' uses: actions/checkout@v3 with: fetch-depth: 0
    Greg
    November 17, 2022 5:10 pm
    Hi Pablo, looks a great job so i'm trying it out on my github, but i'm facing an issue when running the job, openjdk-8-jdk fails to install with this error Err:2 http://azure.archive.ubuntu.com/ubuntu focal-updates/universe amd64 openjdk-8-jre-headless amd64 8u342-b07-0ubuntu1~20.04 404 Not Found [IP: 52.147.219.192 80] i tried to fix ubuntu version to 22.04, but don't changes the error. of course, the actions stop running and send an error report because of this. Any idea on how to fix that ?
    Pablo
    November 20, 2022 2:21 pm
    Hi Greg, It seems there was a problem with Java 8. I've updated the repo and now the yaml file will install the latest version of java (11 at the time of this writing). Additionally, you can use https://ci.salto.io/ to generate the pipeline automatically for you. Feel free to contact me on LinkedIn if you have more questions https://www.linkedin.com/in/pablis/
    Pablo
    November 20, 2022 2:27 pm
    Hi Michael, Apologies for the late reply. You should be able to use sfg delta in the other workflows files to do the same, deploy only that which has changed. Have you tried that?
    Wayne Chung
    March 15, 2023 9:26 am
    Hi Pablo Thanks for this sharing, really learned a lot from this article which inspire me to implement this amazing solution in our org as well. One quick question, the only thing we nned to do is just copy and paste the whole ".github/workflows" folder from your sample repository into our repostory which we want to kick off this workflow, right?
    Varun Pandhi
    May 09, 2023 11:26 am
    Hi Pablo, Can you please help? I tried with this, but it runs into below error in step "Authenticate to Integration Org"- 3s ##[debug]Evaluating condition for step: 'Authenticate to Integration Org' ##[debug]Evaluating: success() ##[debug]Evaluating success: ##[debug]=> true ##[debug]Result: true ##[debug]Starting: Authenticate to Integration Org ##[debug]Loading inputs ##[debug]Loading env Run sfdx auth:sfdxurl:store -f ./SFDX_INTEGRATION_URL.txt -s -a integration sfdx auth:sfdxurl:store -f ./SFDX_INTEGRATION_URL.txt -s -a integration shell: /usr/bin/bash -e {0} env: APEX_TESTS: ontactTriggerHandlerTest ##[debug]/usr/bin/bash -e /home/runner/work/_temp/e2d7c6c2-3f85-4cfc-93cb-612d5354e761.sh (node:4357) Warning: Deprecated environment variable: SFDX_AUTOUPDATE_DISABLE. Please use SF_AUTOUPDATE_DISABLE instead. (Use `node --trace-warnings ...` to show where the warning was created) (node:4357) Warning: Deprecated environment variable: SFDX_DISABLE_AUTOUPDATE. Please use SF_DISABLE_AUTOUPDATE instead. Error (1): Invalid SFDX auth URL. Must be in the format "force://::@". Note that the SFDX auth URL uses the "force" protocol, and not "http" or "https". Also note that the "instanceUrl" inside the SFDX auth URL doesn't include the protocol ("https://"). Error: Process completed with exit code 1. ##[debug]Finishing: Authenticate to Integration Org
    Tim Williams
    June 14, 2023 8:15 pm
    Late comer to this. Thanks for this. I am looking at this from a BA's perspective, specifically for testing purposes. I appreciate the effort put into this.
