AI agents are quickly becoming autonomous systems capable of taking real action across enterprise software, which is creating huge opportunities for productivity. All the while, it’s also introducing new risks that many companies are only beginning to understand.
A recent AI security whitepaper from AIUC-1 Consortium has highlighted how quickly these risks are emerging, with numbers pointing to a growing gap between AI adoption and AI governance. For companies using platforms like Salesforce – where Agentforce is continuing to encourage automation and integration – that gap could pose some real financial consequences if not managed correctly.
So rather than looking at these statistics in isolation, it’s worth looking at what these risks actually look like in a Salesforce environment. Based on the findings of the whitepaper, here are four ways poorly governed AI agents could accidentally cost Salesforce customers serious money.
1. Billion-Dollar Enterprises Are Already Losing Millions to AI Agent Failures
One of the most striking findings from the whitepaper is the financial impact that poorly governed AI agents are already having on large organizations.
According to the research, 64% of billion-dollar enterprises reported losing more than $1M due to AI agent failures in the past year.
The report links these incidents to a broader shift in how AI is being used across organizations. Rather than just generating suggestions or summarizing information, it’s now increasingly executing tasks and interacting with enterprise software, which naturally opens up its own can of worms, so to speak.
Depending on how they’re configured, Agentforce agents in Salesforce are usually expected to retrieve data, update records, or interact with other systems to complete prompts. So any agent acting on incorrect context, or operating with overly broad permissions, could accidentally trigger changes across the platform and make mistakes very quickly.
As Salesforce CTA Svet Voloshin explains, the biggest risk usually isn’t a single incorrect action, but the combination of automation and access.
“The most realistic financial risks come from scale and automation combined with bad permissions,” Svet said. “An AI agent with excessive access could unintentionally create or modify thousands of records, trigger automation loops, or generate large volumes of API calls that drive up platform costs.”
In essence, the ‘danger’ isn’t necessarily a dramatic system failure. It’s more likely to be small mistakes like incorrect updates or a misinterpreted task that gets repeated automatically across thousands of records before anyone notices.
And inside more complex Salesforce orgs, that kind of cascade can quickly become expensive to clean up.
2. Shadow AI is Growing Faster Than Governance
Another key finding from the research highlights a visibility problem that many organizations are only beginning to understand.
The whitepaper states that the average enterprise is running around 1,200 unofficial AI applications, while 86% of organizations say they have little to no visibility into how those tools access or use company data.
This phenomenon – often referred to as “shadow AI” – is becoming one of the biggest governance challenges facing enterprise tech teams.
In reality, this usually doesn’t always mean rogue tools or malicious activity. Many employees may still be experimenting with different tools, given we’re in an age with so many options at our disposal. However, connecting chatbots to external systems, pasting company data into personal AI accounts, or building lightweight agents that interact with enterprise APIs do bring risks.
The whitepaper also uncovered that 63% of employees who used AI in 2025 pasted sensitive company data into personal chatbots, including source code, customer data, and internal documents.
This runs a particularly sensitive risk for companies running Salesforce. Naturally, giving unofficial AI tools any relative access to deep and sensitive company data means operating outside of the governance and permission structures that Salesforce Admins carefully design. Even orgs that do have Agentforce and use it in a controlled way may have other tools running alongside it without knowing.
The whitepaper also notes that this lack of visibility is one of the main reasons shadow AI incidents can be so costly. When a problem occurs, organizations often struggle to determine what systems were involved, what data was accessed, and how the failure happened in the first place.
For Salesforce teams, understanding what AI tools employees are using and how they’re using them could be a huge difference maker when it comes to both protecting orgs and saving costly mishaps.
3. Most Organizations Don’t Govern AI Agents Like Users
Another major governance gap highlighted in the research comes down to how organizations are treating AI identities.
80% of organizations report risky AI agent behaviors such as improper data exposure or unauthorized system access, yet only 21% of execs say they have full visibility into what their agents are doing – what they actually access, what tools they use, and what permissions they hold. This lack of oversight, in turn, creates quite a big identity problem, per the whitepaper.
It goes without saying that AI agents won’t behave like human employees, operating continuously across multiple systems at machine speed. The whitepaper notes that many agents also create non-human identities – such as service accounts, API keys, and authentication tokens – that can quietly accumulate permissions without the same safeguards applied to human users.
In Salesforce environments, that issue can surface when agents are continuously given access to CRM data and workflows without carefully defined permissions.
Agentforce agents, for example, will need to retrieve different types of data. If permissions around that data aren’t tightly controlled, the agent effectively becomes another system identity interacting with Salesforce data and automation.
That’s why Svet argued that agents should now be treated the same way as integrations.
He said: “An AI agent should absolutely be treated like a system integration user or service account, not like a human user. The core principle is strict least-privilege access with clearly scoped permissions tied to the exact tasks the agent is expected to perform.”
Without those controls, agents can quietly accumulate access across systems. When something then goes wrong, the scale and speed at which they operate can quickly turn a small configuration mistake into a much larger operational problem.
4. Untrusted Inputs Could Turn Salesforce Agents Against the Business
While the other points around AI risk focus on what agents do, another growing concern is what they actually listen to.
The final key aspect of the whitepaper highlights prompt injection as one of the fastest-emerging threats in enterprise AI. In simple terms, this is where an AI system is fed misleading or potentially malicious instructions through the data it consumes – whether that’s documents, emails, or external content – causing it to behave in unintended ways.
In Salesforce environments, where agents are being designed to pull context from a number of sources, this becomes a serious risk.
An Agentforce agent, for example, might retrieve information from case notes, articles, Slack messages, or even external/unknown systems to complete a task (as mentioned in point 2). The difference here is what this data actually does and how it influences the output of an agent. If any of that data contains manipulated or misleading instructions, the agent will struggle to distinguish between legitimate concerns and malicious inputs.
This might be why we’re seeing a rise in the number of Forward Deployed Engineers (FDEs) being hired in the tech space. An FDE is responsible for working with customers to make sure AI systems are running safely and are suitable for real-world environments, and, according to Salesforce, job postings for this role have spiked 800% in the last nine months.
Naturally, we’ll likely see more FDEs emerge over the next few years as AI becomes more embedded in day-to-day operations. Otherwise, you may risk having agents act on untrustworthy data, proving to be a waste of time and investment.
Final Thoughts
AI agents aren’t going anywhere anytime soon, and some clear risks are already starting to emerge that have the potential to damage enterprises in several ways – financially, but also reputationally if things go horribly wrong.
Ultimately, these findings highlight what needs to be done to scale AI agents safely in the Salesforce ecosystem and avoid the kinds of costly failures that are already beginning to surface.
For more Salesforce conversations and industry insights, check out the latest episode of the Picklist podcast below.