Help us improve Gearset
We love getting feedback from our users on how we can make Gearset even better. Post your ideas for improvements, new features, and bug fixes alike, and vote for others – let us know what’s important to you.
-
Enable Field Level Mapping for Data Deployments
Currently there is no field-level mapping option available for data deployments of objects. This is a big blocker when we want to migrate data for only selected fields across Salesforce instances.
9 votes -
Extend the Gearset public API to include data deployments between sandboxes and scratch orgs
Allow triggering data deployments via the public API to eliminate having to sync data manually. This also includes triggering creation of scratch orgs via the Gearset API so that Gearset can instantly recognize them.
The workflow we would like to automate is: a feature branch is checked out, which triggers a CI/CD pipeline to create a corresponding scratch org, do a metadata deployment from a test sandbox, and sync data from the same test sandbox. This allows us to create on-demand full copies of the environment for E2E testing (incl. external integrations) and feature development that requires…
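A rough sketch of the automation we have in mind, assuming a hypothetical Gearset endpoint for triggering a saved data deployment. The scratch org creation uses the existing sfdx CLI; the Gearset URL, token, and template ID below are placeholders, not a documented API:

```python
import os
import subprocess
import requests

# Branch name provided by the CI system (placeholder variable name).
branch = os.environ["BRANCH_NAME"]

# Create a scratch org for the feature branch (real sfdx CLI command).
subprocess.run(
    ["sfdx", "force:org:create",
     "-f", "config/project-scratch-def.json",
     "-a", f"scratch-{branch}",
     "--durationdays", "7"],
    check=True,
)

# Hypothetical Gearset API call: trigger a saved data deployment from the
# test sandbox into the new scratch org. Endpoint and payload are
# illustrative only; no such endpoint exists today.
resp = requests.post(
    "https://api.gearset.com/public/data-deployments",   # placeholder URL
    headers={"Authorization": f"token {os.environ['GEARSET_TOKEN']}"},
    json={"templateId": "MY_SAVED_TEMPLATE", "targetOrg": f"scratch-{branch}"},
    timeout=60,
)
resp.raise_for_status()
```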
5 votes -
Allow record data load from CSV file
While it is nice to deploy record data from org to org with Gearset, it would also be nice to be able to deploy data from a CSV file, with the ability to custom-map the headers. The native Salesforce Data Loader has this capability, as do many 3rd-party tools. Please incorporate this capability into Gearset.
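For context, a minimal sketch of the header remapping we do today outside Gearset, assuming a simple_salesforce connection; the CSV headers, field mapping, and credentials are placeholders:

```python
import csv
from simple_salesforce import Salesforce

# Map CSV headers to Salesforce field API names (illustrative mapping only).
HEADER_MAP = {"First Name": "FirstName", "Last Name": "LastName", "E-mail": "Email"}

sf = Salesforce(username="user@example.com", password="...", security_token="...")

# Re-key each CSV row so the dictionary keys are field API names.
with open("contacts.csv", newline="") as fh:
    rows = [
        {HEADER_MAP[col]: value for col, value in row.items() if col in HEADER_MAP}
        for row in csv.DictReader(fh)
    ]

# Bulk-insert the remapped rows into Contact.
sf.bulk.Contact.insert(rows)
```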
6 votes -
I would like to be able to deploy directly from environment variables.
This feature would allow me to set up each sandbox environment after doing a refresh from production.
3 votes -
Data Deployment Template that supports complex rules
Looking to be able to set up a Data Deployment template that supports complex rules with record limits (a rough sketch of the queries this would replace follows the examples below).
For example...
- Select 100 accounts that have been modified recently that belong to division A that contain at least 1 contact, 1 opportunity and 1 activity.
- Select 100 accounts that have been modified recently that belong to division B that contain at least 1 contact, 1 opportunity and 1 activity.
- Select 100 accounts that have been modified recently that belong to division C that contain at least 1 contact, 1 opportunity and 1 activity.
Currently have to build 3 or…
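To make the repetition concrete, here is a rough sketch of how the three near-identical selections could collapse into one parameterised rule; the Division__c field is a placeholder, and the activity condition is omitted from the SOQL to keep the sketch short:

```python
# One template, three divisions: the same rule with only the division changing.
QUERY_TEMPLATE = (
    "SELECT Id FROM Account "
    "WHERE Division__c = '{division}' "               # placeholder custom field
    "AND Id IN (SELECT AccountId FROM Contact) "      # at least one contact
    "AND Id IN (SELECT AccountId FROM Opportunity) "  # at least one opportunity
    "ORDER BY LastModifiedDate DESC "
    "LIMIT 100"
)

for division in ("A", "B", "C"):
    soql = QUERY_TEMPLATE.format(division=division)
    print(soql)   # today each of these has to be built as its own template by hand
```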
2 votes -
Allow Data Deployment to handle more complex structures
Currently, the user is able to select Obj A, Obj B that references Obj A, and Obj C that references Obj B. Then, in the next step of the data deployment, it allows the user to select objects that reference Obj A, B or C, and objects referenced by those objects.
Setting that example aside, I want to be able to select: objA(id), objB(id, objA), objC(id, objB, objD), objD(Id), objE(Id, objD), where objA is the first object I select and everything else is filtered on it.
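As a toy illustration of the request, here is the reference graph above and a traversal that finds every object connected to objA through lookup relationships; all names are the placeholders from the example:

```python
from collections import deque

# child object -> objects it looks up to (taken from the example above)
REFERENCES = {
    "objA": [],
    "objB": ["objA"],
    "objC": ["objB", "objD"],
    "objD": [],
    "objE": ["objD"],
}

def objects_filterable_on(root: str) -> set[str]:
    """Collect every object connected to `root` through lookup relationships."""
    # Build the reverse edges too (parent -> children that reference it).
    referenced_by = {obj: [] for obj in REFERENCES}
    for child, parents in REFERENCES.items():
        for parent in parents:
            referenced_by[parent].append(child)

    seen, queue = {root}, deque([root])
    while queue:
        obj = queue.popleft()
        for nxt in REFERENCES[obj] + referenced_by[obj]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(objects_filterable_on("objA"))  # all five objects are reachable from objA
```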
1 vote -
Provide a better error when "invalid or null picklist values" is thrown.
"invalid or null picklist values" is thrown and i have too many picklists to figure out which one is the issue. Please provide speicific fields that are causing errors.
1 vote -
Add Flow Definition to Disable Rules for Data Deploy
With the deprecation of Workflow Rules, I'd love to have the disable-rules option include active Flow Definitions
2 votes -
Add an option to bypass all automation (flows/workflows/process builders) during data deployments
The Salesforce Import Wizard has an option to not trigger any automation during data jobs. It would be fantastic if Gearset provided such an option.
This would include:
- Workflows
- Process builders
- Flows
I know this feature already exists for validation rules and triggers.
1 vote -
Data deployments should ask if you want to Insert or Update or Insert & Update data
Data deployments should ask if you want to Insert or Update or Insert & Update data.
Currently it always performs an upsert only.
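For reference, a small sketch of the three operations using simple_salesforce's bulk client; the credentials, record Ids, and the External_Id__c field are placeholders:

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...", security_token="...")

records = [{"LastName": "Smith", "External_Id__c": "A-001"}]

# Insert only: always creates new rows (may create duplicates).
sf.bulk.Contact.insert(records)

# Update only: requires the Salesforce Id of each record, never creates rows.
sf.bulk.Contact.update([{"Id": "003XXXXXXXXXXXXXXX", "LastName": "Smith"}])  # placeholder Id

# Upsert (what happens today): matches on an external Id,
# updating existing rows and inserting the rest.
sf.bulk.Contact.upsert(records, "External_Id__c")
```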
1 vote -
One-click Sandbox data deployment of all objects
I think a great feature would be for the tool to copy all object data from one org to a sandbox, and if the amount of data is too large for the sandbox, allow the user to specify which objects to take sample data from, and how, i.e. take no data from a certain object, or take 5% of the most recently created data, etc. It would also be helpful to show the objects that have the most data.
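As an illustration of the sampling idea, a rough sketch that turns per-object rules into the queries a copy job could run; the object names and the rule format are made up for the example:

```python
# Per-object sampling rules: None = skip the object, a float = fraction of the newest rows.
SAMPLING_RULES = {
    "Account": 0.05,        # 5% of the most recently created accounts
    "Contact": 0.05,
    "AuditLog__c": None,    # take no data from this object
}

def build_queries(record_counts: dict[str, int]) -> list[str]:
    """Build one SOQL query per sampled object, newest records first."""
    queries = []
    for obj, fraction in SAMPLING_RULES.items():
        if fraction is None:
            continue
        limit = max(1, int(record_counts[obj] * fraction))
        queries.append(f"SELECT Id FROM {obj} ORDER BY CreatedDate DESC LIMIT {limit}")
    return queries

print(build_queries({"Account": 120000, "Contact": 300000, "AuditLog__c": 9999}))
```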
1 vote -
When deploying metadata and linking it to a Jira issue, it would be a good idea to have more Jira issue fields visible, not just the Jira issue key (id)
When deploying metadata and linking it to a Jira issue, it would be a good idea to have more Jira issue fields visible, not just the Jira issue key (id). I would like to have the issue "Summary" field visible.
1 vote -
Ability to mark (star or add to favourites) a deployed package for quick reference
Once the deployment is done, if I want to clone the same package after 2 months, it is difficult to search for it. If there were a way to mark it, by adding a star or adding it to a favourites list, it would be much handier and available for quick reference.
1 vote -
Ability to activate / deactivate Flows / Processes as part of the metadata and/or data deployment process
It should be possible to have Gearset deactivate flows and/or processes based on a selection, and to roll back afterwards, similar to what has been created here: https://sfswitch.herokuapp.com/
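For background, flows can already be toggled with a Tooling API update on FlowDefinition; a minimal sketch with requests follows (the instance URL, session ID, and flow name are placeholders):

```python
import requests

INSTANCE = "https://example.my.salesforce.com"   # placeholder org
HEADERS = {"Authorization": "Bearer <session id>", "Content-Type": "application/json"}

def set_flow_active_version(flow_name: str, version: int) -> None:
    """Activate the given flow version; version 0 deactivates the flow."""
    # Look up the FlowDefinition Id by its DeveloperName.
    resp = requests.get(
        f"{INSTANCE}/services/data/v58.0/tooling/query/",
        params={"q": f"SELECT Id FROM FlowDefinition WHERE DeveloperName = '{flow_name}'"},
        headers=HEADERS,
        timeout=30,
    )
    flow_id = resp.json()["records"][0]["Id"]

    # PATCH the active version number (0 = deactivated).
    requests.patch(
        f"{INSTANCE}/services/data/v58.0/tooling/sobjects/FlowDefinition/{flow_id}",
        headers=HEADERS,
        json={"Metadata": {"activeVersionNumber": version}},
        timeout=30,
    ).raise_for_status()

set_flow_active_version("My_Record_Triggered_Flow", 0)   # deactivate before the data load
```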
10 votes -
Allow masking of State and Country to recognize API values
If the State and Country picklists have been updated so that the API values are set to ISO country codes, masking of those fields during a data deployment does not work.
Data load errors: "FIELD_INTEGRITY_EXCEPTION: There's a problem with this country, even though it may appear correct. Please select a country/territory from the list of valid countries.: Billing Country: BillingCountry --"
Disabling masking works around the issue.
It would be great if the application could check the State/Country list and utilize the API values for masking.
1 vote -
Allow the capability to deploy to multiple Salesforce organizations within one deployment
Currently you can only push to one Salesforce org; however, we manage multiple organizations and would like the capability to push the same package to multiple orgs.
1 vote -
Replicating Data should include backups as a source
Currently it's possible to replicate data from a backup job by going into the backup run and clicking replicate data.
However, this involves extra steps... it would be good if you could just select backups as a source from the data deployment screen.
1 vote -
Create a Data Deployment Template for migrating Territory Management (1.0 & 2.0)
I would like to see a Gearset-defined template for Territory Management in Salesforce. I am definitely in need of this functionality, as it is a post-deployment step once you have deployed all the Territory Management metadata.
In ETM 2.0, for example, after deploying the related metadata you need to deploy all related records for the UserTerritory2Association object, which holds which users are assigned to each territory.
Not a big template, but it would be a good thing to have as a preset.
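To show the kind of post-deployment record load meant here, a rough sketch with simple_salesforce that copies user-territory assignments from a source org to a target, re-mapping territory Ids by DeveloperName; credentials are placeholders and user Id re-mapping is glossed over:

```python
from simple_salesforce import Salesforce

source = Salesforce(username="admin@source.example", password="...", security_token="...")
target = Salesforce(username="admin@target.example", password="...", security_token="...")

# Territory Ids differ between orgs, so map them by DeveloperName.
def territory_map(org):
    rows = org.query_all("SELECT Id, DeveloperName FROM Territory2")["records"]
    return {r["DeveloperName"]: r["Id"] for r in rows}

src_name_by_id = {v: k for k, v in territory_map(source).items()}   # Id -> DeveloperName
tgt_id_by_name = territory_map(target)                               # DeveloperName -> Id

assignments = source.query_all(
    "SELECT UserId, Territory2Id FROM UserTerritory2Association"
)["records"]

payload = [
    {
        # User Ids also differ across orgs in practice and would need their own
        # re-mapping (e.g. by username); kept as-is here for brevity.
        "UserId": a["UserId"],
        "Territory2Id": tgt_id_by_name[src_name_by_id[a["Territory2Id"]]],
    }
    for a in assignments
]

target.bulk.UserTerritory2Association.insert(payload)
```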
3 votes -
Add option to make picklist fields non-restricted before performing a data deployment
Similar to being able to disable validation before performing a data deployment, it would be very useful if there were an option to automatically uncheck the "Restrict picklist to the values defined in the value set" option for some or all of the picklist fields in a deployment.
We have data that includes picklist values that are no longer active; this causes errors on individual records as part of the load. Currently I have to pick through all the errors manually, then uncheck the option for the corresponding picklist fields in the target org. Then after performing the deployment I…
1 vote -
Allow serial deployment instead of parallel
When executing a data deployment, an 'Unable to lock row' error can occur if multiple transactions try to lock the same record in Salesforce.
Because the batches are executed in parallel, the chances of getting this error are high, depending on the situation.
For example, the master record in a master-detail relationship will always get locked when DML is executed on a child record.
If two batches are executed at the same time and they both have at least one record that references the same parent, a locking error will occur (a sketch of the serial option follows below).
3 votes
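A rough sketch of the serial mode being asked for, using the Bulk API 1.0 job settings directly; the instance URL, session ID, and external Id field are placeholders, and Gearset would of course manage this internally:

```python
import requests

INSTANCE = "https://example.my.salesforce.com"    # placeholder org
SESSION_ID = "<session id>"

# Bulk API 1.0 can run a job's batches one at a time via concurrencyMode=Serial,
# which avoids two batches locking the same master record simultaneously.
job_xml = """<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>upsert</operation>
  <object>Contact</object>
  <externalIdFieldName>External_Id__c</externalIdFieldName>
  <concurrencyMode>Serial</concurrencyMode>
  <contentType>CSV</contentType>
</jobInfo>"""

resp = requests.post(
    f"{INSTANCE}/services/async/58.0/job",
    headers={"X-SFDC-Session": SESSION_ID, "Content-Type": "application/xml"},
    data=job_xml,
    timeout=30,
)
resp.raise_for_status()
print(resp.text)   # the jobInfo response includes the new job id
```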
- Don't see your idea?