CICD With Kai Tribble
Today’s post is by guest author Kai Tribble, Senior Software Engineer at GrubHub. Kai and I first met while working in different divisions of the same company in 2018. At the time, he was using Docker to create containerized deploy scripts for his Salesforce org. We were working on build pipelines for our respective Salesforce orgs, but I was impressed that his setup could easily be tested locally. My own shell scripts looked pretty shabby in comparison! We both went our separate ways in relatively short order, but at the start of the pandemic I reached out to him — both hoping to stay in touch, and to see if he was interested in writing a guest post on his own CI/CD journey. I’m glad now to be able to present that post to you, so without further ado — let’s dive in!
This article covers advanced Salesforce development topics, and the approach is opinionated. While I’ll go through which tools and approaches we use, I won’t go into much depth about the rationale for those choices in this particular article. The goal is to give you a sense of the possible patterns, outcomes, and goals of a sane Salesforce DevOps process before you dive into the work.
This article assumes that:
- You’ve completed the SFDX trail.
- You’re using a monorepo for your Salesforce environment.
It’s no secret in the Salesforce developer community that a constant struggle exists within the migration of code written in a sandbox environment to Production. As Salesforce engineers, we’ve been given a variety of toolsets including Change Sets, ANT, and more-recently, SFDX to simplify and automate migrations. Highlighting the latter, I’d like to dive into how we can use SFDX (alongside some helpful plugins) to fully automate not only your deployment process, but other crucial parts of the application development lifecycle.
Let’s start with answering the simple “why” question. Why set up our own CI and CD scripts? Here were the things we were looking for, in order of importance.
- We needed to know for sure that something we were working on would not only pass apex unit tests, but any other automated tests we wanted to consider. We also wanted to know whether or not those changes could be deployed successfully without much back and forth with the Salesforce Deployment tooling. This starts with establishing a base source of truth, and in our case, that’s the repository.
- Developers and admins of varying familiarity and skillsets will be making updates to our repository. An admin working with us can do everything from spinning up a new development environment to deploying changes to Production. Of course, those features are built within Salesforce already, but refreshing sandboxes from the Setup menu can be time-consuming depending on the size of that sandbox’s source org, and Change Sets are, well, change sets. It actually takes fewer clicks for a user within our repo to do either of those things, and that’s intentional.
- A rule of thumb we tend to use is “Automate what you can, and what you can’t, add to the list of tech debt items to automate later”.
- I didn’t want our team to ever be in a spot where we couldn’t move forward on a deployment because of a bug in another tool we depended on. Each team member owns a feature from grooming/assignment to release. Included in that work is devops, so the health of our CI/CD flow becomes a team responsibility. Ownership empowers our team members, and encourages them to try new things and learn new concepts and programming languages. The result is a team that’s able to deliver more impactful, less Salesforce-centric solutions.
Another couple of advantages to our process are:
- Conflicts occur far less frequently and can be mitigated easily.
- Feedback is given to the initiator without context-switching.
With these tenets in mind, let’s go into how our org went from change sets and unverified production changes to a reliable, easy-to-use, and flexible system.
At this point, you should already have the following configured:
- a repository. We use GitHub.
- a build automation tool. We use GitHub Actions.
- a Connected App to your Dev Hub. This is Production in our case.
Behind the scenes, we have a few shell and Python scripts, YAML files, and even some anonymous Apex. To keep things simple, let’s focus on the GitHub Actions powered by YAML files. You can invoke a GitHub Action in a variety of ways, but we’ll use workflow dispatch actions, which allow us to run a job based on user input.
Side note: as a rule of thumb to avoid malicious injection attempts, any workflow dispatch input should be stored as an environment variable for the job.
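To make the side note concrete, here’s a minimal sketch of the validation side of that rule. It assumes the workflow step exposes the dispatch input as an environment variable (e.g. `env: PACKAGES: ${{ inputs.packages }}`) rather than splicing it into a shell command; the variable name and allowed character set are illustrative assumptions, not our exact implementation.

```python
import os
import re

# Hypothetical allow-list for package names; anything outside it (shell
# metacharacters, whitespace tricks, etc.) is rejected outright.
ALLOWED_PACKAGE = re.compile(r"^[A-Za-z0-9_.@-]+$")

def parse_packages(raw: str) -> list[str]:
    """Split a comma-separated package list from a workflow dispatch input,
    rejecting anything that could smuggle commands into a later sfdx call."""
    packages = [p.strip() for p in raw.split(",") if p.strip()]
    for p in packages:
        if not ALLOWED_PACKAGE.match(p):
            raise ValueError(f"Suspicious package name rejected: {p!r}")
    return packages

if __name__ == "__main__":
    # The workflow stores the user input in an env var, never interpolates it.
    print(parse_packages(os.environ.get("PACKAGES", "")))
```

Because the input only ever reaches the job as data (an environment variable) and is validated before use, a malicious value fails loudly instead of executing.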
Create a sandbox for each contributor. A “sandbox-per-developer” approach is not novel, but how our team authenticates into them may surprise you. When establishing a regular sandbox refresh strategy (twice a week for our team), I needed to get creative with how users would access these environments without production credentials. More on that in the next section.
There are a couple of ways Dev Hub authorization is typically handled:
More secure: JWT OAuth 2.0 token authorization is the preferred approach for handling Dev Hub authorization because it requires decryption to validate a server certificate. Server certificates are validated using server keys (this is what’s encrypted and stored in your repo). Use a reputable tool to handle your encryption and decryption; GitHub recommends GPG. Store your generated secrets in a secure vault.
Less secure: Auth URLs can be used to dynamically log into orgs that have previously been accessed using the web-based authorization flow. sfdx’s `force:org:display` command (with the `--verbose` flag) will output the auth URL for whichever org is set as its default.
Either of these approaches requires the CI build runner to access the generated secrets. GitHub allows secrets to be stored as environment variables in the repository. Those secrets are obfuscated in build logs.
Every time a sandbox is refreshed, its client id and client secret are invalidated. While it’s a totally plausible strategy to store each sandbox’s username, client id and client secret as individual repository secrets, that management can become cumbersome. Here’s an alternative approach:
- Create the Connected App used to handle authorization inside a Developer Edition org.
- Authorize the connected app into Production using the production URL, but the Developer Edition’s client id.
The only caveat to this approach is that Developer Edition orgs expire after a year of inactivity. If you’re refreshing the server certificate annually, that shouldn’t be an issue. However, for those using CA-trusted certificates for their Dev Hubs, you need to log in to your Developer Edition org at least once a year, otherwise it will be marked for deletion. Now, you can authorize into any sandbox refreshed from the Dev Hub using the same server key, client id, and secret combination (all already obfuscated and securely stored). Shout-out to Anthony Heber for this awesome trick.
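As a sketch of what that shared-credential authorization might look like in a CI job, here’s a helper that assembles the `sfdx force:auth:jwt:grant` invocation after the server key has been decrypted. The function name and file paths are illustrative assumptions; only the username needs to change per sandbox, since the client id and key come from the one Developer Edition Connected App.

```python
# Sketch: build the sfdx JWT grant command a CI job might run. Everything
# except the sfdx flags themselves is an assumption for illustration.

def jwt_grant_argv(client_id: str, key_file: str, username: str,
                   sandbox: bool = True) -> list[str]:
    """Return the argv for authorizing an org via the JWT flow.
    Sandboxes authenticate against test.salesforce.com."""
    instance_url = ("https://test.salesforce.com" if sandbox
                    else "https://login.salesforce.com")
    return [
        "sfdx", "force:auth:jwt:grant",
        "--clientid", client_id,      # shared Developer Edition Connected App
        "--jwtkeyfile", key_file,     # decrypted server key
        "--username", username,       # service account + sandbox suffix
        "--instanceurl", instance_url,
    ]

# In the actual job: subprocess.run(jwt_grant_argv(...), check=True)
```

Building the command as an argv list (rather than a formatted shell string) also keeps the earlier injection advice in force: no value is ever interpolated into a shell.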
Within our repository exists a mapping configuration of GitHub contributors to sandboxes. Using GitHub Actions’ established environment variables, we can determine the GitHub username of the actor of the workflow dispatch. That allows us to dynamically determine which sandbox to cycle and produce temporary credentials sent to the user. Note, I specified sandbox and not user. There exists a service account in Salesforce for the sole purpose of sandbox authorization. Its username is constant, so the job only needs to append the `.sandboxName` suffix assigned to the user in the contributor configuration file.
Scratch orgs are much simpler. In our case, users can specify the packages they’d like to install into their new scratch org. Attempting to create scratch orgs that mimicked production became an arduous task for our team, so we use scratch orgs for Unlocked Package development only. Unpackaged (`force-app/`-adjacent) work and org-dependent packages are iterated on using sandboxes.
Back to the workflow dispatch action. You can specify a `choice` input type with options for the types of dev environments to create. You can also specify an input with a default value for the packages you want to install into a scratch org, listed as comma-separated values (see the earlier side note about avoiding injection here).
Now, from the user’s perspective, they would only need to navigate to the Actions tab, select the type of org they need and, optionally, the packages they wish to install into their selected org. After a few minutes, they’ll receive temporary credentials (use whatever approach you want here, but don’t output credentials in your logs) for logging into their ephemeral environment.
Naming conventions are overlooked far too often (heck, I even posted them last in this section). Our classes, fields, and even packaged folder structures are documented and enforced. PMD can be used in combination with other tools to validate that these standards and patterns are being followed. Not only are the files in your package easier to find in your local development environment, they’re also easier to spot in the Salesforce UI.
Deciding your branch strategy is your biggest enabler for continuous delivery. Most Salesforce examples and recommendations include using an intermediate branch (such as `release/`) to facilitate deployment into a persistent environment. However, that conflicts with a tenet of continuous delivery[1]:
> Branches should be viewed with suspicion, and long-lived feature branches and branches for deployments should be avoided. Pull requests should be minimal and deployments should only be batched if necessary.
How does this look in practice though? Here are the steps we take to get from a feature to Production:
- Create a feature branch and save the changes from the environment into that feature branch.
- Create a pull request into the `main` branch.
- Make sure your branch is up-to-date with `main`.
- Manage any deployments into the end user testing environment using PR comments or labels. Both can invoke GitHub Actions.
- After GitHub branch checks pass and user acceptance testing scenarios are confirmed, squash your changes and merge into the `main` branch.
- Create a tag from `main` to start the deployment into your Production environment.
If a deployment causes issues because of unaccounted-for edge cases, you can easily roll back by one of the following:
- Reverting the offending feature branch merge, then deploying into the end user testing environment and Production. Required for rolling back code changes.
- Deploying the previous tag to the end user testing environment and Production.
- (For unlocked packages) Installing the previous major.minor version of the package.
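For the unlocked package option, the “previous” version has to be picked from the list of released versions. Here’s a hedged sketch of that selection; it assumes you can enumerate released version numbers (for example, from `sfdx force:package:version:list --released --json`), and the input shape is an illustration rather than our exact tooling.

```python
# Sketch: choose the package version to reinstall during a rollback.

def previous_version(released: list[str], current: str) -> str:
    """Return the version released immediately before `current`, comparing
    major.minor.patch.build segments numerically (not lexically, so that
    e.g. 1.10 sorts after 1.9)."""
    key = lambda v: tuple(int(part) for part in v.split("."))
    ordered = sorted(released, key=key)
    idx = ordered.index(current)
    if idx == 0:
        raise ValueError("No earlier version to roll back to")
    return ordered[idx - 1]
```

The result feeds straight into an install step (`sfdx force:package:install`) against the end user testing environment and Production.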
All of these can be accomplished by using sfdx plugins and GitHub Actions. SFDX Git Delta is a fantastic, responsive, open-source plugin that auto-creates `destructiveChanges.xml` files based on the git diff in a specified directory. However, it can only provide most of the lift for reverting code changes that reside in unpackaged directories, since those deployments require Apex tests to be run for coverage. A simple deployment artifact can supply the tests you need to run for the revert.
Here’s the first place you’ll encounter conflicts, and the most effective way to avoid them is communication. Communication in this case is not from dev to dev, but rather from pull request to pull request. Our repository uses the GitHub CLI to automatically notify devs if their work touches potentially-conflicting files with another PR that’s currently undergoing end-user testing. If the feature branch isn’t up-to-date, developers receive a message stating they need to rebase their feature branch prior to deploying to UAT, and the action ends. To discover Unlocked Package conflicts, labels are automatically assigned to PRs if changes to that package are made in the feature branch. Finally, if any artifacts aren’t merged into `main` post-Production deployment, users receive a notification to merge those automated PRs and the deployment is halted.
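The heart of that PR-to-PR check is just a set intersection over changed-file lists. In the real job the lists would come from the GitHub CLI (e.g. `gh pr diff <number> --name-only`); in this sketch they’re plain inputs, and the function name is an assumption.

```python
# Sketch: flag a new PR when it touches the same files as a PR that is
# currently in end-user testing.

def conflicting_files(new_pr_files: list[str],
                      uat_pr_files: list[str]) -> set[str]:
    """Files modified by both PRs; a non-empty result triggers a
    'please rebase' comment on the newer PR before it can deploy to UAT."""
    return set(new_pr_files) & set(uat_pr_files)
```

An empty result lets the deployment proceed; anything else becomes the body of the automated comment.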
`specifiedTests` — Ideally, you want to run all local tests with each and every deployment. However, most of us have inherited legacy Salesforce environments littered with an overabundance of integration and end-to-end tests. In our case, the legacy tests contained methods without assertions, unnecessary object construction, failures, and the unholy relic of `SeeAllData=true`. While these should be refactored as they’re encountered, we still want to keep our changes small. To do this effectively, the test classes that need to run are listed in a file that a bash script reads into the `force:apex:test:run` command. Don’t just include the test for your class; include the tests that touch the process start to finish to avoid deploying bugs. This isn’t needed for unlocked packages, since tests are run during the `sfdx force:package:version:promote` step prior to production deployment.
`sfdx-project.json` — Every in-house-built unlocked package is postfixed with `@latest` in our org.
`sandbox-allocation.json` — As mentioned before, this file determines which sandbox to retrieve based on the GitHub username of the workflow’s actor.
As Salesforce adopts more modern practices for development and deployment, we should keep in mind that there are plenty of open-source tools available to facilitate our adoption of these techniques. Hopefully, I’ve provided some easy tips to get started with, but you can dive in deeper by experimenting with custom plugins for your org. You can even build your own SFDX plugins that extend the SFDX CLI using TypeScript.
James: a big thanks to Kai for contributing this post and for giving us a peek behind the scenes at GrubHub! Make sure to check out TypeScript & SFDX for more DevOps-style work.
Only one footnote this time around 😇