Salesforce is working on a new AI platform that automatically studies end users and takes action on their behalf, according to reports.
The project, code-named ‘Agent Albert’, is set to be unveiled ‘by the end of this year’, according to an article shared on X by Salesforce Founder and CEO Marc Benioff, seemingly confirming the claims.
What Is Salesforce’s Agent Albert?
Agent Albert is, according to the WSJ, the culmination of an effort that began three years ago, when Benioff, galvanized by ChatGPT’s debut, instituted a standing Saturday meeting to drive Salesforce’s AI push.
While Salesforce first launched its own AI research unit in 2014, the sudden appearance of OpenAI’s controversial new LLM in 2022 caught the CRM company somewhat off guard.
Benioff reportedly held a three-day meeting at Salesforce Tower in San Francisco with around 40 of the company’s executives in early 2023.
They were said to have created an internal group chat called ‘AI at Salesforce’ as they worked out how to pivot the company’s business model towards artificial intelligence.
Steve Fisher, president and chief product officer, said that Benioff and other executives spent hours every Saturday, for months, working on the shift.
The next year, Agentforce was unveiled at Dreamforce ‘24, with several ramped-up versions released over the following months – and it’s very apparent at this point that Salesforce has pivoted hard into the AI race.
Now, by the end of 2026, Salesforce “plans to unveil a new AI platform that automatically studies its users and takes actions on their behalf,” according to the WSJ article shared by Benioff.
The phrasing of “by the end of this year” seems to imply a Dreamforce ‘26 announcement. It’s worth noting that this was not mentioned at Trailblazer DX last week, to our knowledge.
SF Ben has asked Salesforce for comment.
Is Being Studied by an AI Cool or Creepy?
While details are scant at the moment, what little we have reminds me of the “creepy-to-cool” spectrum for understanding technology.
Tools like LLMs are, on the surface, cool. Being able to type in a prompt and see lines upon lines of code come out, or a checklist of things to pack for your vacation, is very useful.
As someone who conducts interviews regularly for my work, it’s hard to express just how convenient it is to have not only a full transcript of a call generated automatically, but also an intelligent summary of each point we addressed. Compared to the days of listening back to a recording and manually typing out each word, key by painstaking key, this has made my working day much, much less tedious.
On the other hand, I do now feel slightly paranoid about using phrases that have a whiff of the AI-generated to them, such as, “It’s not just X, it’s also Y”, or an em-dash – which is an incredibly useful punctuation mark.
Tech companies are keen to shove our faces in just how convenient AI tools can be, and also tell us how bright the future looks, with monotonous and tedious tasks handled automatically.
Corporations want us to think AI is cool because, for the most part, they have an interest in us believing that. That doesn’t mean they’re wrong, though a hint of skepticism is always necessary.
Final Thoughts
In any case, it falls to individuals, advocacy groups, and the media to point out when things veer towards the creepy. An AI platform that takes actions on a user’s behalf can, theoretically, be quite cool, I suspect. Sometimes we forget to do things that we said we would do in meetings, and having an agent automatically follow up for you would be to the benefit of many people.
But I wonder if people really do desire to be “studied”, and have actions taken on their “behalf”, on a human, instinctual level – in their workplace, or out of it.