SFDX is the most common buzzword among Salesforcies right now. If you know it, I am sure you would have been tempted to try it out and might even have incorporated it into your mainstream development cycle (okay, deployment cycle). But if it somehow failed to grab your attention, let me bring you up to speed.
SFDX stands for Salesforce DX, and DX in turn stands for Developer Experience. As per Salesforce’s documentation, SFDX is “a new way to manage and develop apps on the Lightning Platform across their entire lifecycle”. And rightly so. SFDX brings a whole host of capabilities and features including authorisation, managing scratch orgs, packaging components, continuous integration, data migration and much more.
Picking the (sweetest) cherry
With all these capabilities, I am sure you can think of a ton of potential improvements in your current process, or maybe you want to address a particular pain point. But here is a catch. Deviating from the current process usually requires development time, downtime, and user training, which often comes at the expense of features being delivered. When that happens, both Project Managers and Architects need to make a judgment call to ensure the following:
- This effort, diverted from mainstream feature development, should only be spent on creating a setup that addresses the most frequent and time-consuming challenges.
- There should be a way to return to the existing process if anything unforeseen shows up in the new setup. In other words, the new setup should coexist happily with the existing setup.
- The benefit should outlive the cost, i.e., the development time and the learning curve should not be so great that the setup becomes obsolete before it pays off.
When we, at Picnic, first dabbled with SFDX in late 2018, we were also trying to solve a very interesting and quite possibly unique challenge. We too had faced these sorts of questions and conundrums. Let me elaborate.
In 2018, Picnic was operational in 2 countries (now, we are operational in 3). Salesforce is a multi-org setup within Picnic, i.e., there exists 1 Salesforce org per country that Picnic operates in. Each country and thus each org-hierarchy (by org-hierarchy, I mean all sandboxes belonging to or cloned from a certain production environment) behaves independently. This means all orgs naturally have their own production environments, UAT environments, and dev environments. Well, scratch the last one. At Picnic, most of the features we build are common to all countries/orgs and thus, technically, we don’t need multiple dev environments—just one is enough.
So, our development cycle is something like this:
- Build in 1 developer sandbox and unit test it.
- Deploy in all 3 UAT environments (1 for each country), and invite users for UAT or BAT. If required, use these orgs for performance and integration testing.
- Finally, deploy in all 3 production environments.
Since our development model is slightly unique, our challenges were also slightly different. Put simply:
- We need to deploy components not only within the same org-hierarchy, but also across different org-hierarchies.
- Since we need to do multiple deployments (3 per release, one for each country), we need to run them all in parallel. This means we were looking for a way to replicate the deployment process without major changes. We also wanted it to be scalable, so that if new orgs are added (whether due to Picnic’s expansion into other countries or simply the addition of one more staging sandbox), the deployment time, and therefore effort, doesn’t increase manifold.
- All our metadata is version controlled, and we wanted to keep it that way.
- We were fascinated by the concept of Scratch Orgs and wanted to have the possibility of including them in our development lifecycle.
Like all other Salesforce teams out there, we were heavily reliant on Change Sets for deployments, but that only got us halfway. With Change Sets, we could not deploy beyond the org-hierarchy. That is, it’s impossible to deploy from, say, NL-Dev to DE-UAT. To circumvent this limitation, we had to rely on IDEs to “manually” move these components. In this step, we would select all the components in the IDE that we added in our Change Set and “deploy” to a target org of a different country, and thus a different org-hierarchy. In all fairness, this “deploy” was the IDE’s way of actually copy-pasting the components from the source to the target org.
Beyond this major limitation, the setup did get the job done, but it was cumbersome at best and error-prone at worst:
- Change Sets were very time-consuming. It used to take us quite some time to find and add components in a Change Set; they also took their own sweet time to upload and become available to validate or deploy in the target environment. While validating them in the target environment, if we found out that we mistakenly failed to add a component, we would have to start over. Well, cloning a Change Set brought some respite, but still, upload time was not insignificant. To put this in perspective, it used to take us approximately a whole day to complete deployments.
- Change Sets could not be reused. Often, deployments are duplicative, i.e., the components deployed from a developer sandbox to a testing/staging environment are the same ones that will later be deployed from the testing/staging environment to production. This means that to fully release a feature, we needed to re-create the exact same Change Set in all lower sandboxes until it was deployed to production.
- Multi-org setup further aggravated the situation. We needed not only to re-create Change Sets within an org-hierarchy, but to create them as many times as we had orgs. You can imagine the time and effort required to deploy a feature in 3 orgs. We were also concerned that we could not introduce another org or staging environment without considerably increasing the deployment time and effort. Moreover, since this process involved a lot of manual steps, the higher the number of deployments, the higher the chance of error.
- Multiple technologies were in play. Deployments were made by using 2 separate technologies or processes — Change Sets and IDEs. Though it would not seem like a big issue, it resulted in multiple moving parts in the system.
Clearly, the existing setup was becoming a bit of a bottleneck. So instead of making minor changes to this setup, we took a step back. We were clear about the challenges we were facing and determined to find a smart solution. With that in mind, we started evaluating our development approach, or more precisely, Application Lifecycle Management model, and all the possible options that Salesforce offers. At the same time, we started exploring SFDX and the capabilities that it offers.
A few PDFs and many days later, we were able to summarise the 3 ALM models:
- Change Set Model
- Org Model
- Package Model
Change Set Model
In this model, we have development and staging environments besides production. The components move between these environments using Change Sets.
Org Model
In this model too, we have development, staging, and production environments, but additionally, we (can) have scratch orgs. The major difference is that here, instead of Change Sets, the SFDX CLI is used for moving components between environments.
Package Model
This model is very similar to the Org Model, except for the fact that components move in packages via the SFDX CLI.
This model has many benefits over the Org Model, such as modularising the metadata. Eventually, we went on to implement this model after a lot of deliberation and architectural planning. We have written so much about this model and our journey in the following blogs:
- The Salesforce Makeover Project: A blog series | by Lorena Salamanca | Picnic Engineering
- Introducing a Multi-Org Property Orchestrator for SFDX Projects | by Svava Hildur Bjarnadóttir | Picnic Engineering
Evaluating ALM Models
[Comparison table of the Change Set, Org, and Package models against our evaluation criteria]
* with a separate setup
** with version control in place
Thanks to our efforts in understanding and evaluating all the possible ALM models, we were able to make a much more informed decision. The Org Model seemed to address all our challenges without requiring a huge effort or a large deviation from our existing setup; this meant bigger gains with a smaller investment. Not only that, but it also paved the way for us to model and use Packages in the near future.
Org Model in Action
Until now, I have been very theoretical. In this section, I would like to share how we actually made the shift; what we really did to start using the Org Model for our deployments.
The entire deployment process can be arranged in 4 separate steps in sequence:
#1 Initial Setup
Getting your SF org and computer ready for deployment, and most importantly for the SFDX CLI. For this, you would need to:
- Install JDK 11
- Set environment variables (e.g., JAVA_HOME). For example, in macOS, it can be done by
export JAVA_HOME=$(/usr/libexec/java_home -v 11)
- Install Salesforce CLI
- Create a local directory with sfdx-project.json and a sub-directory force-app.
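A minimal sfdx-project.json might look like the following (the API version and login URL here are illustrative):

```json
{
  "packageDirectories": [
    {
      "path": "force-app",
      "default": true
    }
  ],
  "namespace": "",
  "sfdcLoginUrl": "https://test.salesforce.com",
  "sourceApiVersion": "50.0"
}
```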
If you create an SFDX project locally, then this structure is automatically created. To create an SFDX project, go to the desired location and key in
sfdx force:project:create --projectname <project_name>
#2 Authorisation
Once your initial setup is done, you can then move on to a bunch of authorisations.
- Authorise the source org and the target org
sfdx auth:web:login --instanceurl https://test.salesforce.com --setalias <source_org_alias>
sfdx auth:web:login --instanceurl https://test.salesforce.com --setalias <target_org_alias>
If your orgs have domains enabled and are set to strict, then you can replace https://test.salesforce.com with your domain URL.
- Verify these connections by keying in
sfdx force:org:list
#3 Generating Manifest
List the metadata related to your features and developments that you wish to deploy in package.xml. A sample package.xml looks something like this:
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Account-Some Lovely Layout</members>
        <members>Custom_Object__c-Another Lovely Layout</members>
        <name>Layout</name>
    </types>
    <version>50.0</version>
</Package>
There are 2 ways in which you can generate package.xml:
- Simply by looking at your Pull Requests (if you use some kind of version control). However, this is usually easier if your repository is in a DX Source format instead of a Metadata format. The most visible difference between the two is that all components of an object like fields, validation rules, etc., are combined in one large metadata file in the Metadata format, but are grouped in sub-dirs in Source format. Salesforce documentation has elaborated on this extensively.
- Using the Org Browser extension in Visual Studio Code. This lists all the components of your org and allows a GUI way to select them.
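The difference between the two formats can be sketched with a hypothetical Account object:

```
# Metadata format: all parts of the object in one large file
objects/Account.object

# Source format: parts split into sub-directories
objects/Account/Account.object-meta.xml
objects/Account/fields/Some_Field__c.field-meta.xml
objects/Account/validationRules/Some_Rule.validationRule-meta.xml
```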
Now is the moment to witness these deployments in full action.
- Retrieve the changes defined in package.xml locally
sfdx force:source:retrieve --manifest package.xml --targetusername <source_org_alias>
You can also replace --manifest with --sourcepath to indicate a sub-dir and work with all components of that sub-dir. For example, --sourcepath force-app/main/default/classes will retrieve all classes from the source org.
- Validate these changes in the target org
sfdx force:source:deploy --targetusername <target_org_alias> --checkonly --wait 0 --testlevel RunLocalTests --manifest package.xml
--wait 0 makes the process asynchronous and thus faster. As a response to this command, you will get an ID; keep it handy, you will need it in the next step. Here too, you can use --sourcepath to validate all components of a sub-dir.
- To get the status of your deployment validation, key in
sfdx force:source:deploy:report --jobid <deployment_id> --targetusername <target_org_alias>
Alternatively, you can also check the status using Deployment Status in Setup, just like you would do for Change Sets.
- Once the deployment is validated, deploy these changes in the target org
sfdx force:source:deploy --targetusername <target_org_alias> --wait 0 --validateddeployrequestid <deployment_id>
Alternatively, you can also deploy from the org using Deployment Status in Setup.
- Similar to validation, to get the status of your deployment, key in
sfdx force:source:deploy:report --jobid <deployment_id> --targetusername <target_org_alias>
Alternatively, you can also check the status using Deployment Status in Setup, just like you would do for Change Sets.
#4 Data Migration
Once you are done with deployments, and hopefully you didn’t have many manual post-deployment steps, you might need to go for some data migration.
- Export the data (in JSON) from the source org
sfdx force:data:tree:export --targetusername <source_org_alias> --query "<SOQL_query>"
- Import this data to target org
sfdx force:data:tree:import --targetusername <target_org_alias> --sobjecttreefiles <exported_file.json>
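For context, the exported file uses the sObject Tree JSON format; a minimal example with a single hypothetical Account record:

```json
{
  "records": [
    {
      "attributes": {
        "type": "Account",
        "referenceId": "AccountRef1"
      },
      "Name": "Sample Account"
    }
  ]
}
```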
This new setup indeed made our lives much easier by streamlining our deployments and speeding them up, but while using them, we realised we can improve the process even further:
- Our repository was in the Metadata format, and it wasn’t very transparent to map the deployment manifest 1-to-1 with the components in the repo. To address this, we converted our repo to Source format in steps by using
sfdx force:mdapi:convert --rootdir <metadata_dir>
This improvement eased our transition from the Org model to the Package model, which we went on to adopt later.
- When we reflected on this new deployment setup, we also realised that most of the effort and time was consumed in generating the deployment manifest or package.xml. Since we were using version control and all of our metadata was converted to Source format, we could simply build a tool (or a bash script) that can automatically generate the package.xml by looking at develop/master branch and comparing the git tags. In reality, we didn’t build this tool as we were already on the road to transitioning to Package model.
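Although we never built this tool, a minimal bash sketch of the idea could look like the following. It assumes a Source-format repo, maps only a couple of sub-directories (classes, triggers) to their metadata type names, and takes its input from something like `git diff --name-only <old_tag> <new_tag>`; everything here is illustrative, not what we ran in production.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: build a package.xml from a list of changed
# Source-format file paths, read one per line on stdin.
generate_package_xml() {
  echo '<?xml version="1.0" encoding="UTF-8"?>'
  echo '<Package xmlns="http://soap.sforce.com/2006/04/metadata">'
  while read -r path; do
    dir=$(basename "$(dirname "$path")")   # e.g. "classes"
    name=$(basename "$path")
    name=${name%%.*}                       # strip .cls, .trigger, .cls-meta.xml, ...
    # Map a couple of sub-directories to their metadata type names.
    case "$dir" in
      classes)  echo "ApexClass $name" ;;
      triggers) echo "ApexTrigger $name" ;;
    esac
  done | sort -u | awk '
    $1 != type {                           # a new metadata type starts a <types> block
      if (type != "") { print "    <name>" type "</name>"; print "  </types>" }
      type = $1
      print "  <types>"
    }
    { print "    <members>" $2 "</members>" }
    END {
      if (type != "") { print "    <name>" type "</name>"; print "  </types>" }
    }'
  echo '  <version>50.0</version>'
  echo '</Package>'
}

# Usage (hypothetical tags):
#   git diff --name-only v1.0 v1.1 | generate_package_xml > package.xml
```

Sorting the "type member" pairs before the awk pass is what lets the script group all members of one metadata type into a single `<types>` block, which is what the Metadata API expects.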
This whole exercise brought in some key learnings for us, which we would use in other projects as well:
- It’s important to be critical of the current setup to find areas of improvement, and to do this for a setup regardless of whether it has been mature for many years, or was recently introduced. In our case, we critiqued both our Change Set model and the recently introduced Org model.
- Before adopting a technology, it’s vital to ensure that it will indeed address the challenges and that the effort required to build and adopt is aligned with the pace of feature development and/or the vision of the team/organisation. For us, though Package model was the most comprehensive option, taking the first step with Org model gave us the opportunity to test this new technology and smoothen the transition.