Help us improve Gearset
We love getting feedback from our users on how we can make Gearset even better. Post your ideas for improvements, new features, and bug fixes alike, and vote for others – let us know what’s important to you.
74 results found
-
Provide the ability to get a deployment report on multiple deployment packages
Instead of having to download the deployment report individually for multiple deployments, I would like to be able to select multiple deployments and download one report that contains a list of all files that were deployed as part of the multiple deployments. It is too cumbersome to manually download a report or .csv for each deployment and then copy/paste the info into one location so that I can get a comprehensive list of all files that were deployed.
7 votes -
Allow out of order Update deployment
Currently you can only deselect and select update elements in an order. For example I have 3 recent updates ordered 1 to 3. One being the oldest and first update and 3 being the newest and last update. The current functionality would require me to select item 1 if I wanted to also update item 2. And if I wanted to update item 3 I would have to select all the items before it.
I would like to select each one individually no matter their deployment history. Often I need to do this because the updates have no relation to…
1 vote -
Include CPQ Favorite objects in the supported objects
CPQ includes Favorite objects as part of the package but Gearset doesn't support them. These can be shared proactively with users so they don't have to make their own Favorites. We make these in a sandbox along with new CPQ product configurations and would like to be able to push these up to production along with the new configs.
1 vote -
Mask data that already exists in an environment
At the moment, we know that you can mask data whilst seeding it into an environment. However, what we would like is a way to mask data that already exists in an environment.
A use case is that we have a full sandbox that contains customer data that we want to mask without needing to seed the data in first.
It's a common use case for organisations that deal with 3rd party suppliers but do not want their customer data exposed to the 3rd parties. For instance, a Salesforce implementation partner is delivering a project. They need access to the environments to build and…
1 vote -
Allow filtering of records for data deployments via a SOQL
Would like to be able to run a SOQL query to select the records to base a data deployment on; in many cases we are trying to build test data that matches specific scenarios based on fields on parent or related objects.
The current selection of field filters doesn't give us the flexibility we need. For most of our data deployments to sandboxes today, we run a data load in prod to set a field to true on the base object from a SOQL query and then use that field to filter in the data deployment.
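For illustration, here's roughly what that workaround looks like today with the sf CLI from PowerShell (the object, the filter, and the Deploy_Flag__c field are just example names, not anything Gearset provides):

# Query the base-object records that match the scenario in prod.
$result = sf data query --query "SELECT Id FROM Opportunity WHERE Account.Industry = 'Banking'" --target-org prod --json | Out-String | ConvertFrom-Json

# Flag each match so the data deployment can filter on the field.
foreach ($record in $result.result.records) {
    sf data update record --sobject Opportunity --record-id $record.Id --values "Deploy_Flag__c=true" --target-org prod
}

Being able to point the data deployment at the SOQL directly would remove this extra step.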
2 votes -
Include duplicate rules in "Disable validation rules, triggers and flows"
Here's a PowerShell script I hacked up to do this from sfdx directories in VS Code:
$duplicateRules = ".\force-app\main\default\duplicateRules"
$temporary = -join("duplicateRulessave", [System.Guid]::NewGuid())

# Save repository versions
New-Item -ItemType Directory -Path $temporary
Move-Item -Path "$duplicateRules\*.*" -Destination $temporary

# Retrieve org versions
sf project retrieve start --metadata DuplicateRule

# Make deactivated versions
$files = Get-ChildItem -Path $duplicateRules -Filter "*.xml"
foreach ($file in $files) {
    $content = Get-Content $file.FullName
    if ($content -match "<isActive>true</isActive>") {
        $content -replace "<isActive>true</isActive>", "<isActive>false</isActive>" | Set-Content $file.FullName
    } else {
        Remove-Item $file.FullName
    }
}

# Deploy deactivated versions
sf project deploy start --metadata DuplicateRule

# Restore repository versions
Get-ChildItem "$duplicateRules\*.*" | Where-Object { !$_.PSIsContainer } | Remove-Item
Move-Item…
1 vote -
In a data deploy, allow filtering data by previously deployed parent objects.
For example, instead of "Only deploy Opportunity records that are children of the Account.Opportunities records that are being deployed", which filters opportunities by the accounts in the same deploy, you would have two deploys. The first would deploy accounts. The second would filter opportunities by those accounts already in the target org.
This would allow you to modularize deploys. You might want to mix and match subsets of data for specific projects. It would allow working around some of the shortcomings of the account hierarchy gearset magic. If there were transient errors (like exclusive locks) you could repeat a smaller…
1 vote -
Deploy user records via REST API
Currently there is no way to deploy user records via Gearset because the Data Deployment feature uses the Bulk API. This is painful for those cases when we have sandbox testers that we need to move between sandboxes!! Please add the ability to deploy users via REST API.
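For context, creating a user through the REST API outside Gearset looks roughly like this PowerShell sketch (the instance URL, access token, and field values are placeholders from an assumed existing OAuth flow):

# Insert a User record via the standard sObject REST endpoint.
$body = @{
    Username          = "tester@example.com.devsandbox"
    LastName          = "Tester"
    Email             = "tester@example.com"
    Alias             = "tester"
    TimeZoneSidKey    = "Europe/London"
    LocaleSidKey      = "en_GB"
    EmailEncodingKey  = "UTF-8"
    LanguageLocaleKey = "en_US"
    ProfileId         = "00exx0000000000AAA"
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "$instanceUrl/services/data/v59.0/sobjects/User" `
    -Headers @{ Authorization = "Bearer $accessToken" } -ContentType "application/json" -Body $body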
1 vote -
Change Order of Steps for Data Deploys
When configuring a data deployment, currently "Planning Deployment Steps" is the last step, but this is where I often encounter the "Error while planning deployment" because of recursion/etc. That means I need to back up through the previous 4 steps, adjust, and then go BACK through the related objects, data masking, disable rules steps before it will try again.
The disable rules step in particular takes some time, making this a really tedious cycle to repeat.
I understand needing the related objects step before validating the deployment, but if the order changed to:
- Select Objects,
- Select Related Objects
- Plan Deployment…
1 vote -
Please introduce an external ID in the background as some other products do. This would make it flawless.
Without an external ID, your solution is time-consuming and prone to errors that need troubleshooting.
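To illustrate what "external ID in the background" means in practice, here's a minimal sf CLI upsert sketch (External_Id__c and accounts.csv are example names): rows that match an existing record update it, and the rest are inserted, so re-runs don't create duplicates.

sf data upsert bulk --sobject Account --file accounts.csv --external-id External_Id__c --target-org staging --wait 10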
1 vote -
add a column for file sizes and show a warning message if over the limit (might happen deploying content assets)
Recently when deploying a large number of content assets, I ran into an issue where the deployment would freeze on "Caching metadata for faster comparison".
Support told me the deployment size was too large, so I broke it up into smaller deployments. I suggest adding a column with file sizes and perhaps a warning message if you exceed the deployment limits.
3 votes -
Create a link in the main menu under Data Deployments for managing Data Deployment Templates. It is not very intuitive to manage them currently
Create a link in the main menu under Data Deployments for managing Data Deployment Templates. Having to start a deployment in order to manage the templates is not very intuitive.
1 vote -
Add navigation options to the data deployment process
We are making use of the data deployment tool to seed our sandboxes with test data. When you execute a template it always navigates you to the last step (which makes sense). There are times however that we need to go back to the 2nd step and change filter criteria. This includes several clicks and waiting for pages to load.
Is it possible to add a navigation bar at the top that shows the various steps so you can quickly jump to one you need to update?
i.e. Step 1 > Step 2 > Step 3 > etc.
1 vote -
Enable Field Level Mapping for Data Deployments
Currently there is no field mapping option available for data deployments of objects. This is a big blocker when we want to migrate data for only selected fields across Salesforce instances.
17 votes -
Implement Data migration support for nCino
nCino is a native Salesforce application for banking customers, configured using records, similar to how CPQ is configured. Deploying the data-based configuration from environment to environment can be very difficult and painful. This would be a valuable feature for every nCino customer.
3 votes -
Extend the Gearset public API to include data deployments between sandboxes and scratch orgs
Allow triggering data deployments via the public API to eliminate having to sync data manually. This also includes triggering creation of scratch orgs via the Gearset API so that Gearset can instantly recognize them.
The workflow we would like to automate is: feature branch is checked out which triggers a CI/CD system pipeline to create a corresponding scratch org, do a metadata deployment from a test sandbox, and sync data from the same test sandbox. This allows us to create on-demand full copies of the environment for purposes of E2E testing (inc. external integrations) and feature development that requires…
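For reference, the scratch org and metadata half of that pipeline can already be scripted with the sf CLI (the alias and definition file below are just examples); the data sync from the test sandbox is the piece that needs the Gearset API:

# CI step sketch: create a scratch org for the feature branch, then push the metadata.
sf org create scratch --definition-file config/project-scratch-def.json --alias feature-org --duration-days 7 --set-default
sf project deploy start --source-dir force-app --target-org feature-org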
8 votes -
Allow record data load from CSV file
While it is nice to deploy record data from org to org with Gearset, it would also be nice to be able to deploy data from a CSV file, with the ability to custom map the headers. Native Salesforce Dataloader has this capability, as well as many 3rd party tools out there. Please incorporate this capability into Gearset.
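As an interim workaround outside Gearset, headers can be remapped before a CLI load; a rough PowerShell sketch with example file, column, and field names:

# Rename CSV headers to the target field API names, then load the file with the sf CLI.
Import-Csv .\source-data.csv |
    Select-Object @{ Name = "Name"; Expression = { $_."Account Name" } },
                  @{ Name = "AnnualRevenue"; Expression = { $_."Revenue" } } |
    Export-Csv .\mapped-data.csv -NoTypeInformation

sf data upsert bulk --sobject Account --file .\mapped-data.csv --external-id Id --target-org staging --wait 10

Having the mapping built into Gearset would remove the pre-processing step.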
8 votes -
Provide More Detailed Information on Data Deployment Summary Steps
Currently the data deployment summary displays an elaborate interface but with very little useful information other than record counts. On the right side are Steps, but those steps do not tell the user much. I can see that in one step x records were fetched, for example, and in another step y records were fetched, but I can't see WHY.
Example use case: I had a situation where I was filtering an object but the filter wasn't being honored (more records were being deployed than expected). I could see from the data deployment summary that one of the steps was…
1 vote -
Add a way so we can easily integrate commits into Salesforce.
What I mean by this is, give users the ability to create and associate commits from within Gearset to a custom object in Salesforce.
The commit would create a new record in Salesforce, under the custom object, let's call it 'Commits'.
We'd be able to pass through data such as the commit status, the ID of the commit, a link to the commit itself taking us back to Gearset, the name, and author, etc, etc...
I'd imagine this is possible through the use of the BitBucket API, however with SF teams who lack developer resources, this would be a nice…
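As a rough sketch of the Salesforce half of this idea (the Commit__c object, its fields, and the link are made-up names; the Bitbucket/Gearset trigger side isn't shown):

# Create a record on a hypothetical Commit__c custom object with the commit metadata.
sf data create record --sobject Commit__c --target-org prod `
    --values "Name='9f2c1ab' Status__c='Deployed' Author__c='A. Developer' Commit_Link__c='https://example.com/commit-link'"

Gearset (or a Bitbucket webhook) would call something like this after each commit.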
1 vote
- Don't see your idea?