Artificial Intelligence / Security

Best Practices for Safely Deploying Salesforce Einstein Copilot

By Nathan Coppinger

Branded content with Varonis

While the rollout of Einstein Copilot brings great leaps in productivity and streamlines processes, like other gen AI tools, it also comes with risks that you must take the necessary steps to mitigate. Without a proper deployment plan and understanding of how Einstein Copilot works, your organization could face more risks than rewards.

In this article, we’ll break down the Einstein Trust Layer, how to protect sensitive data in Salesforce from being exposed using Einstein Copilot, and best practices for deploying the AI-powered assistant. 

The Einstein Trust Layer

Salesforce is committed to securing the data that customers process through Einstein Copilot. To do this, they’ve developed the Einstein Trust Layer.

Customer data flowing through Einstein Copilot is encrypted within the Trust Layer and none of that data is retained on the backend. Any sensitive data like PII, PCI, and PHI is also masked.

The Einstein Trust Layer will also attempt to reduce the amount of biased, toxic, and unethical responses through its toxic language detection capabilities, reducing the burden on the end user.

Salesforce has stated it won’t use customer data to train the LLMs behind Einstein Copilot and it won’t be sold to third parties.

The Einstein Trust Layer ensures your data is safe. (Source)

Protecting Your Salesforce Data – A Shared Responsibility

One of the key components of Salesforce security is its shared responsibility model. The shared responsibility model defines the roles and responsibilities of Salesforce and its customers regarding the secure use of data, AI, and the overall platform.

In this model, Salesforce is responsible for securing the infrastructure, platform, and services that enable AI (as shown by the Einstein Trust Layer) and the secure processing of customer data through Einstein Copilot.

At the same time, customers are responsible for securing the applications and configurations that connect to the AI, including:

  • Permissions – Einstein Copilot will surface all organizational data that an individual user can access.
  • Data – Einstein Copilot relies on up-to-date data to provide high-quality and accurate results.
  • Usage – Customers must ensure Einstein Copilot is used properly and responsibly.

This ensures both parties work together to form the highest level of security and trust.

The shared responsibility model between customers and cloud service providers (CSP) like Salesforce. (Source)

Best Practices to Prepare Your Salesforce Orgs For Einstein Copilot

Lock Down Permissions to Sensitive Data

Einstein Copilot inherits the access and permissions of the Salesforce user, so it’s imperative to mitigate risk by locking down critical data, ensuring that each user (and thereby Einstein Copilot) can only access what they need to do their job.

To understand each user’s permissions, you’ll need to parse their:

  • Profile
  • Permission Sets
  • Permission Set Groups
  • Role/hierarchy
  • Muted permissions

However, Salesforce permissions are highly complex and require significant effort to analyze and understand – especially considering a large enterprise can have up to 1,000 Permission Sets with dozens of permissions in each one.

On top of that, security teams must rely on Salesforce teams to help them complete this process, and because Salesforce Admins have their plates full with keeping the business running, completing this process can be overwhelming.

Update and Purge Old Internal Data and Documentation

Einstein Copilot relies on your internal documentation and data to ground generative AI prompts with helpful context and provide accurate and relevant information.

As Salesforce says, “Good AI starts with great data.”

Einstein Copilot pulls data from Salesforce Data Cloud, which unifies multiple data sources, including your Salesforce environment and cloud storage (like AWS and Snowflake).

Data is the source of truth for generative AI, and to ensure the best Einstein Copilot experience and reduce the risk of hallucination, your data needs to be:

  • Secure
  • Available
  • Clean
  • Timely

Along with ensuring your permissions are locked down and correct, you should also perform an initial record and documentation review across the data stores Einstein Copilot pulls from and update or purge out-of-date, stale, and inaccurate information.

Then, you can set up a regular review process to keep your internal documentation clean and up to date. 

How Einstein Copilot uses your data to build gen AI experiences in Salesforce. (Source)

Identify Sensitive Data That AI Shouldn’t Access

There is bound to be data in your environment that you don’t want Einstein Copilot to be trained on or surface answers from; with Salesforce, you can create zones that section off data you don’t want Einstein Copilot to access. However, it’s up to the customer to determine what that data is and where it lives. 

Ensure Proper Use

Many departments – from support to marketing – will use Einstein Copilot to generate customer and public-facing content. However, as we mentioned previously, the quality and accuracy of AI output often rely on the quality of the input. 

Salesforce’s Prompt Builder ensures your users are generating proper responses from the AI. This feature enables admins to set up guard rails for specific processes within the workflow (for example, customer support responses) to ensure appropriate, on-topic, and quality AI output.

The Prompt Builder will provide the user with a template to feed into Einstein Copilot, dynamically grounding the prompt with information like customer names, accounts, context, and relevant articles that may further help the AI’s response.

Create prompt guardrails through the Einstein Trust Layer. (Source)

This will also help you safeguard against prompt injection attacks in which a malicious actor tries to provide instructions that trick the model into giving a response it shouldn’t. 

Prepare Your Salesforce Orgs For Einstein Copilot With Varonis

Before you start your AI journey with Einstein Copilot, it’s essential you understand your Salesforce security posture and ensure that your data is prepared for a safe and smooth rollout.

The Varonis Data Security Platform helps organizations gain an overview of their Salesforce security posture by:

  • Greatly simplifying permissions analysis.
  • Automatically discovering and classifying sensitive data.
  • Surfacing stale data.
  • Identifying critical misconfiguration.
  • Managing third-party app risk.
  • Continuously monitoring sensitive data activity and detecting risky behavior.
  • Integrating with and enhancing Salesforce Shield.

Try Varonis for Free

Varonis can help your organization prepare for a safe and smooth Einstein Copilot rollout. 

Request a demo today and get started with a complementary Salesforce risk assessment. Getting started is free and easy, and the results are yours to keep.

The Author

Nathan Coppinger

Nathan is the Product Marketing Manager at Varonis.

Leave a Reply

Comments:

    Freddy Fernandis
    October 11, 2024 6:47 am
    I must congratulate team for writing an amazing blog on Einstein Copilot, your content is really helpful in terms of Einstein AI implementation.
    Vishal
    October 17, 2024 1:31 am
    Can anyone share if there is way to personalize Einstein Agents name to something brand specific and not just show up as Einstein for the end user ?