Salesforce has come under fire from its own community of users over a recently discovered setting within Setup that shows the company uses customer data to train “global predictive AI models”. This setting is turned on by default.
This new setting, introduced in the Spring ‘26 release, has given this particular customer data sharing method more visibility, sparking concerns over what customers and users have actually consented to.
Why Toggle Off?
In the Spring ‘26 release notes, Salesforce revealed that users could now use a self-service toggle in the Setup UI to opt out of allowing Salesforce to access Customer Data in their orgs. Before this, customers would have needed to submit a support case to opt out.
Salesforce detailed that if users are opted in, customer data in an org could be used to train global predictive AI models, improve the services and features the users have access to, and conduct research into and development of new services and features. This is outlined in Salesforce’s Master Subscription Agreement (MSA).
If users opt out using the toggle in Setup, no new data will be shared. Salesforce retains previously collected data for up to 30 days before it is deleted and can’t be recovered, and no Salesforce employee will view your Customer Data outside of a support case, a pilot, or as otherwise described in legal agreements.
Access to customer data is turned on by default unless you’re using Government Cloud or your organization has already decided not to share this data with Salesforce. This premise is not entirely unheard of – Meta, for example, uses user-generated content from its platforms to train AI – but where Salesforce is different is that Meta uses public or semi-public data. Salesforce’s community is concerned that it may be using much more confidential business data.
A Salesforce spokesperson told SF Ben that the company’s access to customer data is governed by each customer’s legal agreement and is used for specific purposes tied to the services and features customers use, including “predictive AI models and service improvements.”
“Where permitted, Salesforce gives customers controls to manage this access; to give customers greater visibility and make this easier to manage, we recently made the option available in Setup for eligible customers to turn off customer data sharing,” they said.
“Customer data remains subject to Salesforce’s confidentiality and security commitments, and this update does not affect our zero-data-retention policy with third-party LLMs.”
Fears Over Data Usage
While many SaaS companies reserve the right to use customer data for product improvement in their terms, most leading AI vendors – including OpenAI, Anthropic, and Google – have moved toward an explicit opt-in model for training on enterprise data. Salesforce’s approach stands out not because it is unprecedented, but because it appears to run counter to that emerging industry norm.
Salesforce also famously builds much of its work, including its AI work, on the value of trust, which is perhaps why this addition has stung community members that much more.
Pablo Gonzalez, a Salesforce Security Expert, shared his thoughts in a LinkedIn post highlighting fears around the data usage.
“The ‘Trust Layer’ is actually the name of one of the meeting rooms in the San Francisco HQ,” he said. “It’s where the marketing team sits.”
Christopher Hickman, a Global Enterprise Solution Architect, shared similar thoughts, asking: “Wasn’t this supposed to be something Salesforce explicitly didn’t do, the whole reason you pay Salesforce for your AI instead of someone else/’the trust layer’?”
Cecilia Chiderski, a Solution Architect and the author of the original post on this topic, also highlighted how she felt this had not been communicated, which poses another question: how much is Salesforce accountable for here?
“This Has Been the Case for Years”
Francis Pindar, a Salesforce MVP Hall of Famer and Salesforce Architect, also took to LinkedIn on the matter to provide some context.
“Seeing loads of posts today with people freaking out that Salesforce is defaulting on sharing customer data to train their AI models,” he wrote. “This has been the case for years.”
“But honestly? The AI training thing is just one example of a bigger problem. It’s all already in the contracts. Every time you spin up a Salesforce org, you agree to the MSA. And almost nobody reads it.”
“So here’s a question: have you actually read what you signed?”
He highlighted an adjacent, equally critical issue: the importance of reading the terms and conditions, and understanding what you agree to when you sign up for a product or service. Pindar also listed multiple other terms that users consent to once they agree to the MSA, including no warranty on products in Beta and permissions around feedback.
“In the Salesforce ecosystem, the defaults always favour Salesforce,” he wrote. “The burden to protect yourself falls entirely on you.”
Andrew Russo, a VP of Business Systems at BACA Systems and a notable Salesforce voice, shared that within his small to medium business (SMB), the details around customer data usage for AI have been clearly documented, with references traceable back to MSA agreements as early as 2018.
“Salesforce didn’t opt you in by default; you opted yourself in when you used specific features related to AI,” he told SF Ben.
Paul Battisson, the CEO of Groundwork Apps, told SF Ben that it is important for customers to be aware of this setting as well as the implications of it.
“Salesforce is not selling your data but using it to improve their products,” he said. “I think everyone should review the documentation and the ‘what is being used and why’ to help them understand the impacts and consequences.”
“If you are working in a regulated industry or with particularly sensitive data, then it should be something you want to look into today.”
Final Thoughts
This particular outpouring from the Salesforce community emphasizes the importance of understanding where data is going and what it is being used for, especially if the data in an org is confidential or sensitive. It should encourage businesses to reevaluate their data strategies to ensure their data is only in the hands of those they have permitted to handle it.
It could certainly be argued that Salesforce should have communicated this Setup update more clearly, and there are still questions as to why it is opt-out rather than opt-in, so it will be interesting to see if that eventually changes.