What’s Brewing at Slack? Controversy Over AI Training Policy
By Ben McCarthy
May 21, 2024
One of the biggest topics of conversation surrounding artificial intelligence right now is not whether it’s useful, but whether we can trust it at all. Trust can mean multiple things when discussing AI: Can we trust the outputs? Can we trust AI companies with our data?
Slack has found itself at the center of this discussion, as it came to light that they may be using customer data to train AI models. According to Salesforce, this is just a huge misunderstanding…
Slack Drama
Salesforce and Slack have been under scrutiny from the tech community over the past few days, as some of the clauses within their AI Privacy Principles have been challenged. The issue was first spotted by a user on Hacker News, who found that customers had to explicitly email Slack to opt out of having their data used to train Slack’s AI models. Salesforce has since updated its Privacy Principles page, but the messaging used to read:
“To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as other Information (including usage information) as defined in our privacy policy and in your customer agreement.”
Of course, this messaging throws up immediate red flags – it sounds like Salesforce is using its own customer data to train LLMs (large language models).
Since its inception in 1999, Salesforce, and subsequently Slack, has viewed trust as one of its core values. Trust is also the core word that Salesforce has been using in its latest rebranding, to position itself as the go-to platform to utilize AI whilst protecting customer data with its Trust Layer.
Thus, unsurprisingly, this discovery was quickly amplified across the media and the wider tech ecosystem.
Putting The Record Straight
Salesforce and Slack were quick to respond to the allegation. For a company that has built up 25 years of customer trust, appearing to prioritize profit over trust could damage that relationship very quickly, so it was vital that they addressed it.
In this instance, it looks as though the Privacy Principles webpage was out of date and not reflective of their current guidelines. This mistake has been confirmed by a spokesperson at Salesforce, as well as publicly by a Director at Slack, Aaron Maurer, on Threads.
Since the news initially came out (May 17th), Salesforce has published a blog post in response explaining exactly how Slack uses AI, and the language in the Privacy Principles has been updated.
Below is the information provided to us by a Salesforce spokesperson, confirming that customer data is not used to train LLMs:
Slack has industry-standard platform-level machine learning models to make the product experience better for customers, like channel and emoji recommendations and search results. These models do not access original message content in DMs, private channels, or public channels to make these suggestions. We do not build or train these models in such a way that they can learn, memorize, or be able to reproduce customer data.
We do not develop LLMs or other generative models using customer data.
Slack uses generative AI in its Slack AI product offering, leveraging third-party LLMs. No customer data is used to train third-party LLMs.
Slack AI uses off-the-shelf LLMs where the models don’t retain customer data. Additionally, because Slack AI hosts these models on its own AWS infrastructure, customer data never leaves Slack’s trust boundary, and the providers of the LLM never have any access to the customer data.
Summary
Incidents like these show how quickly an AI scandal can blow up, and how trust can be lost in an instant. Luckily for Salesforce and Slack, this was just a misunderstanding. I’m sure many other enterprise-sized companies will be reviewing their privacy policies to ensure that everything is clear and above board.
However, if you still feel uncomfortable with anything covered in this article or Salesforce’s recent blog post, you can opt out by having your org or workspace owners (or primary owner) contact Slack’s Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line “Slack global model opt-out request”.
The Author
Ben McCarthy
Ben is the Founder of Salesforce Ben. He also works as a Non-Exec Director & Advisor for various companies within the Salesforce Ecosystem.