When it comes to generative AI, you may be hearing the phrase “trust gap” crop up in more and more conversations around the Salesforce ecosystem.
I’ve been delving into Salesforce’s Einstein Trust Layer – what does it really mean in terms of the secure and ethical use of GenAI? Let’s take a look.
Does the Trust Gap Exist?
Every year, Salesforce publishes the State of the Connected Customer report. No surprise that this year’s edition focuses on how customer engagement with businesses has changed with artificial intelligence (AI), compiling findings from 14,000+ consumers and business buyers across 25 countries.
Standout findings include:
- Nearly three quarters of customers surveyed are concerned about organizations using AI unethically.
- In the 2022 survey, 82% of business buyers and 65% of consumers were open to the use of AI to improve their experiences; now, these numbers have dropped to 73% and 51%, respectively.
- 68% of customers say advances in AI make it more important for companies to be trustworthy, putting a growing onus on brands to earn customers’ confidence as technology matures.
What’s the Einstein Trust Layer?
GenAI technology needs to be grounded in security, ethics, and human oversight, so Salesforce have been thoughtful in their design when bringing any GenAI innovation to market.
The challenge has been to keep GenAI outputs contextual to your organization’s data without sending that data outside of your trusted boundary – whether that boundary is your Salesforce org itself, or a secure gateway to another large language model (LLM) provider your organization wishes to leverage.
So, there’s a balance to be struck between contextualization, security, and flexibility in the range of LLMs that you could leverage.
Contextualization With Control
Say, for example, you ask ChatGPT to write an email targeted to CEOs of manufacturing companies in Germany. The output would be vague – not taking into account your product/service offering, who exactly is in your database, or their history of engagement with your organization.
Salesforce’s own LLM shines with these types of prompts. The prompt (input) goes into Salesforce’s LLM and is appended with key information from Data Cloud (the foundation that speeds up connectivity between different ‘clouds’ across the platform). The output is grounded in your organization’s data, enabling users to write more specific prompts and receive more specific answers back.
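To make that concrete, here’s a minimal sketch of the grounding idea – merging record data into a prompt before it reaches the LLM, so the model doesn’t have to guess who the recipient is. The record, fields, and template are hypothetical stand-ins, not Salesforce’s actual internals:

```python
# Minimal sketch of prompt grounding: org-specific context is merged into
# the prompt before it reaches the LLM. Record fields and template text
# are hypothetical, not Salesforce's actual internals.

CONTACT = {  # stand-in for a record surfaced via Data Cloud
    "Name": "Anna Schmidt",
    "Title": "CEO",
    "Company": "Müller Fertigung GmbH",
    "LastProduct": "FlowWorks Automation Suite",
    "LastEngagement": "attended our Hannover Messe webinar in April",
}

TEMPLATE = (
    "Write a short outreach email to {Name}, {Title} of {Company}. "
    "Reference that they {LastEngagement}, and tailor the pitch to "
    "{LastProduct}."
)

def ground_prompt(template: str, record: dict) -> str:
    """Append key record data so the LLM's output isn't generic."""
    return template.format(**record)

print(ground_prompt(TEMPLATE, CONTACT))
```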
Salesforce has a multi-tenant architecture. Like an apartment building, all Salesforce customers (like tenants) can take advantage of the services the platform provides, while securely owning their data (like furniture in a locked apartment). In this context, organizations won’t be sending requests containing their customers’ data to LLM providers, and inputs/outputs from one organization do not benefit other Salesforce customers.
Salesforce Generative AI Gateway:
To remain compliant at every turn, Salesforce have built what’s known as the Generative AI Gateway. As the name suggests, the gateway is the single checkpoint that inputs pass through before going out to other LLM providers.
When a user writes a prompt from the Salesforce UI that’s intended for an external LLM your organization has connected, it’s passed via the Connect API to the gateway. Here, pre-processing occurs (the Einstein Trust Layer), including PII masking so that sensitive information doesn’t leave the boundaries of your Salesforce org. Only then is the request routed to the appropriate handler.
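As an illustration of the masking step only – the Trust Layer’s actual detection is far more sophisticated and policy-driven than a couple of regular expressions – a sketch of the idea might look like this:

```python
import re

# Illustrative-only PII masking pass, standing in for the Trust Layer's
# pre-processing. This is a sketch of the concept, not the real mechanism.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(prompt: str) -> tuple[str, dict]:
    """Replace detected PII with placeholder tokens; keep the mapping so
    placeholders can be re-hydrated when the LLM response comes back."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            prompt = prompt.replace(match, token)
    return prompt, mapping

masked, mapping = mask_pii(
    "Draft a reply to anna@example.com, reachable on +49 30 1234567."
)
print(masked)   # only placeholder tokens leave the org boundary
print(mapping)  # stays inside the trust boundary for de-masking
```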
AI Adoption With Trust
A number of Salesforce executives have used the quote “AI is the new UI”. Employees from all levels of the organization are looking to GenAI to transform the way they work – especially those in the C-Suite who don’t want to miss out on this tidal wave of change.
Aside from the big-picture security we learned about above, the trust-by-design approach Salesforce have engineered is fundamental to aiding GenAI adoption throughout the organization.
Prompt Studio:
Admins will be key stakeholders in how GenAI technology (Einstein GPT) is deployed and used within their Salesforce orgs.
Salesforce want to provide the ability for any admin to control how prompt inputs and outputs are generated, including reassurance over data privacy and toxicity reduction – that is, the detection of potentially rude, disrespectful, or unreasonable outputs.
Salesforce Prompt Studio enables admins to create prompt templates, choose how they want to ‘ground’ the prompts in Salesforce data, and activate for users quickly and easily.
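Prompt Studio itself is a point-and-click tool, but conceptually a template boils down to something like the sketch below: reusable prompt text, merge fields that ground it in Salesforce data, and an activation switch. All of the names here are hypothetical:

```python
from dataclasses import dataclass, field

# Rough illustration of what a prompt template amounts to conceptually.
# Class and field names are hypothetical; the real tool is configured
# point-and-click, not in code.

@dataclass
class PromptTemplate:
    name: str
    body: str                                   # prompt text with {merge_field} slots
    grounding_fields: list = field(default_factory=list)
    active: bool = False                        # admins activate when ready

follow_up = PromptTemplate(
    name="Case Follow-Up Email",
    body=("Write a polite follow-up to {Contact.Name} about case "
          "{Case.Number}, which concerns {Case.Subject}."),
    grounding_fields=["Contact.Name", "Case.Number", "Case.Subject"],
)
follow_up.active = True  # now available to end users
```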
Human Oversight:
In the rush to implement GenAI, some employees are concerned about the validity of the outputs they receive back, as well as the long-term outlook of their role.
GenAI outputs, including those from Einstein GPT, are not 100% accurate; they should be treated as foundations that a human then validates. Salesforce acknowledges this by keeping the ‘human in the loop’ at every turn in a user’s workflow – in other words, the human user can choose whether to accept or reject the output before it’s activated (e.g. a generated email is sent to the recipient).
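A minimal sketch of that accept/reject gate, assuming a hypothetical send_email function, could look like this – the key property being that nothing is activated without explicit human approval:

```python
# Minimal sketch of the 'human in the loop' pattern: a generated draft is
# never sent automatically; the user must explicitly accept it first.
# send_email is a hypothetical stand-in for the activation step.

def send_email(to: str, body: str) -> None:
    print(f"Sending to {to}:\n{body}")

def review_and_send(draft: str, recipient: str) -> None:
    print(draft)
    decision = input("Accept and send this draft? [y/N] ").strip().lower()
    if decision == "y":
        send_email(recipient, draft)  # human approved: activate the output
    else:
        print("Draft rejected – nothing was sent.")  # safe default
```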
You could think of this as a set of guardrails that, again, improves GenAI adoption by keeping employees reassured, accountable, and invested as stakeholders in the transformation.
Looking to the Future: Regulation
From a conversation with one Salesforce executive, we heard that Salesforce are staying abreast of the changing regulatory landscape internationally and continually keeping their engineering teams educated. Regulatory changes will be reflected in the design of the Einstein GPT platform – in areas such as the interface, data privacy, and storage.
The key takeaway here is that, with the Einstein Trust Layer, Salesforce have built a robust foundation that will be able to pivot in the future, as needed.