Salesforce customers found themselves at the center of one of the most wide-ranging enterprise data theft campaigns in recent memory last year. Several household brands across retail, automotive, insurance, and technology disclosed breaches involving millions of records, FBI alerts were issued, lawsuits followed – and yet, the one point that was repeatedly emphasized was that Salesforce itself had not been hacked.
Instead, attackers linked to hacking groups such as ShinyHunters relied on social engineering, voice phishing, and malicious replicas of legitimate Salesforce tooling to trick employees into granting access to their own environments. In several cases, compromised connected apps were then used to quietly query and exfiltrate sensitive data at scale, before victims were hit with extortion demands.
At first glance, the explanation for these incidents usually boiled down to human error. Salesforce reiterated that no platform vulnerability had been exploited, and many companies described the incidents as third-party or “CRM-related” breaches. But as the campaign dragged on and the list of affected organizations grew, that explanation began to feel incomplete.
Why did attackers keep specifically targeting Salesforce orgs? And why did similar techniques succeed across so many unrelated industries? To answer these questions, SF Ben spoke to multiple industry experts and revisited the mechanics of the attacks themselves. What emerged may have less to do with careless users and more to do with identity complexity, scale, and a security model that works well until it doesn’t.
Salesforce Was an Efficiency Target, Not a Vulnerable One
Looking from the outside in, it’s easy to assume that the wave of breaches affecting Salesforce customers must point to a flaw in the platform itself. But speak to people who understand how these attackers think, and a different picture emerges: one that is far less about technical vulnerabilities and far more about efficiency.
Peter Chittum, Technical Content Director at SF Ben, describes the campaign bluntly: “To me, this is just an efficiency play. Salesforce has hundreds of thousands of customers… any user can potentially be used to compromise basic access, and then it’s just a numbers game.
“You get 1,000 compromises across 1,000 orgs. Some of those might have really good security… but out of that 1,000, maybe five don’t. And that user literally has access to every piece of data in that org. Boom – you’ve scored.”
This is where Salesforce becomes uniquely attractive. Unlike a single-tenant enterprise system, there’s no shared alarm bell, and one compromised org doesn’t automatically trigger scrutiny elsewhere.
As Peter puts it, Salesforce’s “distributed nature of authority” – where customers fully own and control their data – is both a core feature and, at scale, something attackers can exploit. This isolation also slows down detection.
“You compromise one org, maybe that org doesn’t say anything for a long time,” Peter explained. “Or they’re embarrassed. You compromise 20 or 50, and finally people start paying attention.”
Tim Combridge, Technical Content Writer at SF Ben, added another interesting layer to why Salesforce kept coming up in breaches, looking at the value of what really sits inside it.
“Salesforce is the location for all corporate data, all CRM data,” Tim said. “And now with Data Cloud, MuleSoft, and all these integrations, you’re not just stealing records – you’re creating a pathway into other insights as well.”
In other words, Salesforce is more than just where customer data lives; it’s also where you’ll find pipelines, partner relationships, and, increasingly, integrated data from across the enterprise.
Tim also pointed out a simple but familiar pattern within modern cybercrime – once a technique works, it’s very likely to get repeated.
“If you see a door being kicked down and then you see it kicked down again, you think, ‘maybe I can have a go as well.’ Once it rains, it pours.”
It’s important to note that neither expert frames Salesforce as negligent. In fact, Peter is careful to avoid that conclusion, noting that Salesforce had flagged the issue internally months earlier. But both agree that scale changes everything – when a platform becomes as large as Salesforce, even low-success-rate attacks become commercially viable, and social engineering, unlike zero-day exploits, scales effortlessly.
That’s why this campaign didn’t rely on breaking Salesforce. It relied on the same human-focused techniques across many isolated environments, knowing that a small percentage of them would always be exploitable.
Connected Apps Were Essential – And That’s Exactly the Problem
Once attackers gained a foothold inside Salesforce orgs, they didn’t need to exploit any obscure vulnerabilities or bypass any encryption. In many cases, they simply used the platform exactly how it was designed to be used – which is through connected apps.
Peter made it clear that connected apps themselves are not the ‘villain’ in this story, stating: “Connected apps are totally essential. The problem with enterprise software is all of these silos – when you need to connect something to your Salesforce silo so that they talk to each other, you absolutely need a way to do that that is secure.”
Salesforce’s connected app model, built on OAuth, is a genuine improvement on what has come before. Instead of handing over usernames, passwords, or API tokens, OAuth allows access to be scoped, monitored, and revoked at the session level.
“OAuth is really good because you can authorize a particular service to access your org in a particular way,” Peter detailed. “Those are called OAuth scopes. You can revoke that one OAuth session and let everything else stay open for that user – that’s all really, really good.”
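The scoping Peter describes is visible in the authorization request itself. As a rough sketch (the client ID and callback URL below are placeholders, and the scope names are examples of common Salesforce OAuth scopes), a connected app has to name what it’s asking for up front:

```python
from urllib.parse import urlencode

# Salesforce's standard OAuth 2.0 authorization endpoint.
AUTH_ENDPOINT = "https://login.salesforce.com/services/oauth2/authorize"

def build_authorization_url(client_id: str, callback: str, scopes: list[str]) -> str:
    """Build the authorization request a connected app sends the user to.

    The `scope` parameter is the key control: even after the user approves,
    the app can only act within these scopes (e.g. "api" for API access,
    "refresh_token" for long-lived sessions).
    """
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": callback,
        "scope": " ".join(scopes),  # space-delimited, per the OAuth 2.0 spec
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

# Placeholder values for illustration only.
url = build_authorization_url(
    "EXAMPLE_CLIENT_ID", "https://example.com/callback", ["api", "refresh_token"]
)
print(url)
```

The catch, of course, is that a malicious replica of a trusted tool can request exactly the same scopes as the real one – the request looks identical to the user.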
Over time, connected apps become something that most orgs implicitly trust. Once an app is authorized and ‘working’, it fades into the background. That breeds neglect: the app is rarely reviewed, rarely challenged, and often not fully understood by the people responsible for securing the org.
As Peter puts it: “I would guess a huge number of organizations didn’t – and probably still don’t – really understand the potential risk of giving somebody access via a connected app.”
This blind spot was central to what we saw in the latter stages of last year. In the 2025 hacking campaign, attackers distributed malicious replicas of legitimate tools – notably Data Loader – and convinced a large number of users to authorize them. From an org perspective, these requests looked valid. But from the attacker’s perspective, the authorization unlocked everything they needed.
Peter points to one OAuth flow in particular as a weak link: “There were a couple of problems – one of which Salesforce has now changed – and that was the device flow. The amount of information you provided to get access was minimal, and it opened up access a bit too much.”
Salesforce has since hardened that flow, but the wider lesson remains that authorization flows are only as strong as the context around them. If users aren’t fully aware of what they’re approving – and if admins aren’t regularly auditing what’s already been approved – OAuth becomes a quietly escalating risk rather than a safety net.
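To see why the device flow gave users so little to go on, it helps to walk through its shape. The toy simulation below follows the general OAuth 2.0 device flow (RFC 8628), not Salesforce’s actual implementation – note that the only thing the user ever sees is a short code, with no indication of what access approving it grants:

```python
import secrets

# A simplified, self-contained simulation of the OAuth 2.0 device flow
# (RFC 8628). No real endpoints are involved; this is illustration only.
class DeviceFlowServer:
    def __init__(self):
        self.pending = {}     # user_code -> device_code
        self.approved = set()

    def start(self):
        """Step 1: the app (legitimate or malicious) requests a pair of codes."""
        device_code = secrets.token_hex(16)
        user_code = secrets.token_hex(4).upper()  # the short code shown to the user
        self.pending[user_code] = device_code
        return device_code, user_code

    def approve(self, user_code):
        """Step 2: the user types the code into a login page. Notice how
        little context they have: a code, not a description of the access."""
        self.approved.add(self.pending[user_code])

    def poll(self, device_code):
        """Step 3: the app polls the token endpoint until approval lands."""
        if device_code in self.approved:
            return {"access_token": secrets.token_hex(16)}
        return {"error": "authorization_pending"}

server = DeviceFlowServer()
device_code, user_code = server.start()
assert server.poll(device_code)["error"] == "authorization_pending"
server.approve(user_code)  # the single human step in the entire flow
token = server.poll(device_code)["access_token"]
```

One approved code, typed in by one user, and the polling application walks away with a token – which is exactly why an attacker only needs to talk a single employee through “entering a code”.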
Tim reinforces that this issue isn’t confined to obscure tooling or malicious actors impersonating apps, stating: “Salesforce gives you a system administrator profile that has a lot of permissions on it – not every system administrator needs every permission in that profile.”
In many orgs, powerful access is granted by default and rarely stripped back. When a connected app is authorized under a highly privileged user, the ‘blast radius’ grows accordingly. The result is trusted users authorizing trusted-looking apps, which are then trusted indefinitely.
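The least-privilege review Tim describes is conceptually simple: compare what a profile grants against what the role actually needs, and treat the difference as candidates for removal. A minimal sketch (the permission names below are illustrative, not a real Salesforce permission list):

```python
# Illustrative permission sets - not actual Salesforce permission names.
ADMIN_PROFILE = {
    "ModifyAllData", "ManageUsers", "ApiEnabled",
    "ExportReport", "ManageConnectedApps",
}
ROLE_NEEDS = {"ApiEnabled", "ExportReport"}  # what this admin actually uses

# Everything granted but not needed is excess privilege to strip back.
excess = ADMIN_PROFILE - ROLE_NEEDS
print(sorted(excess))  # → ['ManageConnectedApps', 'ManageUsers', 'ModifyAllData']
```

If a connected app is authorized under this user before the review, it inherits the whole left-hand set; after the review, only the right-hand one.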
“Identity is hard,” Peter said. “The people who really, really deeply understand identity are a very rare breed. I think our nature is that once you think that thing is trustworthy, you just tend to trust it.”
In that sense, connected apps didn’t fail Salesforce customers, but revealed how fragile implicit trust becomes at scale, and how easily legitimate security models can be turned into attack vectors.
Complexity, Human Error, and Why the Safety Brakes Didn’t Trigger Sooner
By the time attackers were exfiltrating data from Salesforce orgs, the damage was often already done. The question many organizations were left asking wasn’t how attackers got in, but why they weren’t spotted sooner.
According to Beech Horn, Technology Engagement Manager and Architect at Banham Patent Locks, a major part of the answer lies in just how complex Salesforce environments have become over time.
“Salesforce has been around for a long time,” Beech explained. “We’re up to 26 years now. That means there’s a lot of tech debt, a lot of history, and a lot of bad practices that have built up over time.”
While this may be obvious to point out, most Salesforce orgs weren’t designed in a single moment. They’ve evolved over time, shaped by changing business needs, new features, and years of configuration. As a result, understanding how secure an org really is has become increasingly difficult.
“It’s a complicated product,” Beech explained. “There are so many layers – Flow, LWC, Apex, integrations, APIs, OmniStudio. Trying to track how data is secure through all of those layers is very hard.
“You can ask, ‘Is this Flow secure? Is this Apex secure? Is this LWC secure?’ But there’s nothing that pierces through all the layers end to end.”
Beech adds that this challenge is made worse by current resourcing constraints.
“You end up with orgs where there’s just one member of staff doing administration,” Beech said. “There aren’t dedicated Salesforce security staff. There isn’t budget for it.”
In theory, security should be something everyone owns. In practice, Beech is blunt about the current reality.
“There’s this idea that there shouldn’t be a security role – that security should just be baked in,” he said. “I’d love that. But until everything is self-sufficient and self-reporting, that role is a necessary evil.”
This context matters when examining how social engineering attacks played out. While many breaches involve someone being tricked into an action, Beech argues that focusing on individual mistakes misses the bigger issue: the more important question is what happens after the error occurs.
“Human error is always going to happen, [but] you want safety brakes. You want them to kick in sooner and sooner.”
In many of last year’s incidents, attackers were able to move through orgs, access large volumes of data, and extract it without triggering immediate alarms. That wasn’t because nobody cared, but because many organizations lacked the visibility needed to spot abnormal behavior quickly.
“You can’t make it the customer’s responsibility without giving them the capability,” Beech argued.
Tools that provide that visibility – such as Event Monitoring or advanced security controls – are often difficult to configure, require specialist knowledge, or sit behind additional licenses.
“Event Monitoring is a paid extra, and I think Salesforce needs to give us more of those capabilities and remove the price barrier entirely from event monitoring – I think it should fold [into the core offering].
“I know it’s 10% – it’s going to be a juicy amount of revenue – and I appreciate Shield is like 40% extra on what you’re paying, so that would then reduce down to, say, 30% if you took it out. But I don’t understand how customers are secure without it.
“The other one is Data Detect, which comes free if you buy all of Shield. I’d like to see it become part of the core offering as well.”
Tim reinforces this point from a slightly different angle, noting that responsibility without implementation only goes so far. Salesforce can provide security tooling, but organizations still need the knowledge and visibility to use it effectively, especially as attackers become more sophisticated.
As a result, detection frequently lagged behind compromise. And once data had already left an org, the opportunity to prevent damage had passed.
Beech is clear that the answer isn’t eliminating human error, acknowledging that’s unrealistic.
“You can’t get rid of human error,” he said. “But you can at least detect things sooner.”
That means better observability and clearer signals, built on the assumption that mistakes will happen in your org.
“Salesforce reflects and magnifies whatever gaps you already have in your business processes,” Beech said. “It scales them up.”
One concept that helps explain what was missing in many affected orgs is the idea of canaries, which are deliberately planted tripwires designed to detect suspicious behavior early. Rather than trying to prevent every possible mistake, canaries assume that something will eventually go wrong and focus on spotting it fast.
Beech explained: “I’m a big believer in things like canary records – little trip wires that tell you someone’s moving through the system in a way they shouldn’t be.”
In practice, this might mean fake records, monitored access patterns, or alerts that trigger when data is queried unusually. The goal isn’t perfection, but rather an early warning – reducing the time between compromise and detection, before meaningful damage is caused.
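The mechanics of a canary record are straightforward. The sketch below is a minimal tripwire, assuming you can observe record-access events (for example, from an exported event log); the record IDs, event fields, and alert hook are all illustrative:

```python
# Planted record IDs that no legitimate business process ever touches.
# Any read of one of these is, by construction, suspicious.
CANARY_IDS = {"001FAKE0000CANARY1", "001FAKE0000CANARY2"}

def check_access_events(events, alert):
    """Scan record-access events and fire an alert when a canary is read."""
    for event in events:
        if event["record_id"] in CANARY_IDS:
            alert(
                f"Canary {event['record_id']} read by "
                f"{event['user']} at {event['time']}"
            )

# Example: a normal read followed by a bulk-export tool hitting a canary.
alerts = []
check_access_events(
    [
        {"record_id": "001REAL0000000001", "user": "jdoe", "time": "09:00"},
        {"record_id": "001FAKE0000CANARY1", "user": "svc-dataloader", "time": "09:01"},
    ],
    alerts.append,
)
print(alerts)
```

A broad `SELECT`-everything sweep of the kind used in last year’s exfiltrations would hit the planted records almost immediately, turning an invisible bulk export into a loud, specific alert.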
Overall, at Salesforce’s current scale, this reality helps explain why these attacks caused so much damage, and why the focus is now shifting away from prevention alone, toward visibility, detection, and faster response.
Final Thoughts
The breaches that affected Salesforce customers last year weren’t driven by a single vulnerability or careless mistake. They were the result of scale, complexity, and trust colliding in environments that have become increasingly difficult to understand and monitor.
Salesforce itself wasn’t compromised, but the assumptions many organizations made about visibility and shared responsibility were stress-tested at a large and public scale. As attackers continue to favor social engineering over technical exploits, the lesson is that security can’t rely on perfect behavior.
Going forward, resilience will depend less on prevention alone and more on how quickly orgs can detect misuse, limit impact, and respond when trust breaks down.
SF Ben note: We at SF Ben strongly recommend that all admins and org owners prioritize auditing the connected apps currently in use in their orgs. This includes identifying the origin of all connected apps, removing any unused or unknown apps, setting permissions for access to remaining apps, and removing the ability for any user to add connected apps without approval. We’ve published an article to help.
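As a starting point for that audit, the sketch below shows the shape of the review: take an inventory of connected apps and flag anything of unknown origin or long unused. The field names, vendor list, and 90-day threshold are illustrative assumptions, not a Salesforce API – substitute whatever export your org’s tooling gives you.

```python
from datetime import date, timedelta

# Vendors you have verified as the source of an app (illustrative).
KNOWN_VENDORS = {"Salesforce", "AcmeETL"}
STALE_AFTER = timedelta(days=90)  # assumed threshold for "unused"

def flag_apps(apps, today):
    """Return names of connected apps that are unknown in origin
    or haven't been used within the staleness window."""
    flagged = []
    for app in apps:
        unknown = app["vendor"] not in KNOWN_VENDORS
        stale = today - app["last_used"] > STALE_AFTER
        if unknown or stale:
            flagged.append(app["name"])
    return flagged

# Example inventory (hypothetical apps and dates).
apps = [
    {"name": "Data Loader", "vendor": "Salesforce", "last_used": date(2025, 6, 1)},
    {"name": "Mystery Sync", "vendor": "Unknown", "last_used": date(2025, 6, 10)},
    {"name": "Old Importer", "vendor": "AcmeETL", "last_used": date(2024, 1, 5)},
]
print(flag_apps(apps, date(2025, 6, 15)))  # → ['Mystery Sync', 'Old Importer']
```

Anything that comes out of a review like this deserves a decision – remove it, re-scope it, or document why it stays – rather than the indefinite default trust described above.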