
How Salesforce Admins Can Apply Data Governance to Einstein

By Jeremy Carmona

Last month, a client asked if they could “just turn on Einstein and see what happens.” I asked in return: “Would you flip on API access for your whole org without any documentation and just… see what happens?” The silence that followed told me everything I needed to know. I’ve been doing AI governance work across five Salesforce orgs over the past year, and here’s what keeps coming up: this isn’t some brand-new discipline we all need to learn from scratch. It’s data governance with higher stakes and much faster consequences.

Think about it: if you’ve already built a consent management system, locked down field-level security, or written up data retention policies, you’ve got about 80% of what you need already. The main difference now is that AI moves faster, touches more records, and makes decisions you can’t always predict ahead of time.

You Already Have the Foundation

As a Salesforce admin, you’ve answered these four questions dozens of times:

  1. Who gets to see what? That’s your profiles, permission sets, and sharing rules.
  2. What data can leave the org? Integration security and connected apps.
  3. How long are we keeping things? Retention policies and archiving strategies.
  4. How do we prove we’re compliant? Field history tracking and audit trails.

AI governance asks the exact same questions – the only real difference is speed and scale. What takes a user ten minutes to query manually, Einstein can do in milliseconds across thousands of records. And a poorly configured prompt can surface Personally Identifiable Information (PII) you thought was locked down.

If your data governance has gaps, Einstein’s going to find them – usually in front of your key stakeholders.

Five Questions to Ask in Every Audit

When I audit an org’s AI readiness, I use these five questions. If you can answer all of them confidently, you’re in good shape to deploy Einstein features.

1. What Data Can Einstein Actually See?

This sounds super basic, but I’ve watched orgs skip this step and regret it quickly. Some things you need to check are:

  • Which objects have AI features enabled?
  • What fields are you including in your prompt templates?
  • Are your sensitive fields properly masked?

Here’s the good news: Einstein already respects field-level security. If a field is hidden by FLS, Einstein literally cannot read it, which means it can’t include it in analysis or responses. You don’t need to build a special field set, as your existing security controls work just fine.

What you actually control is in Prompt Builder. When you build templates, you pick which merge fields to include. That’s your control point. Only reference the fields that should be accessible to AI.

Prompt Builder template editor showing merge field selection controls for AI-accessible data.

Here’s a real example from a nonprofit client: they built an Einstein template for donor emails. The initial version referenced Annual_Income__c and Wealth_Rating__c, and the first draft email said: “Thank you for your generous support. Based on your income bracket, we believe you’re positioned to consider a legacy gift of $500K.”

While this is technically accurate, it’s wildly inappropriate for donor communication. The fix is simple, though: choose better fields for your template.
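One way to make “only reference the fields that should be accessible to AI” enforceable is to audit each template against an approved field list before it ships. The sketch below is illustrative, not a Salesforce API: the allowlist, field names, and merge-field syntax are assumptions standing in for whatever your org’s Prompt Builder templates actually use.

```python
import re

# Hypothetical allowlist of fields approved for AI prompt templates.
# These field names are examples, not pulled from a real org.
APPROVED_FIELDS = {
    "Contact.FirstName",
    "Contact.Preferred_Name__c",
    "Contact.Last_Gift_Date__c",
}

# Matches merge fields like {{Contact.FirstName}} or {Contact.FirstName}.
MERGE_FIELD_PATTERN = re.compile(r"\{\{?([\w.]+)\}?\}")

def audit_template(template: str) -> list[str]:
    """Return merge fields the template references that are NOT approved."""
    referenced = set(MERGE_FIELD_PATTERN.findall(template))
    return sorted(referenced - APPROVED_FIELDS)

template = (
    "Dear {{Contact.FirstName}}, thank you for your support! "
    "Based on {{Contact.Annual_Income__c}}, consider a legacy gift."
)
print(audit_template(template))  # → ['Contact.Annual_Income__c']
```

Running a check like this in a deployment script, or even by hand during template review, would have caught the Annual_Income__c reference before any donor saw it.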

2. Where Does Our Data Go?

Einstein has different features with different data handling, and it’s important to treat them separately.

  • Zero retention (Einstein Copilot with Trust Layer): This is architectural, not something you toggle on and off. Salesforce has contracts with LLM providers like OpenAI that guarantee customer data is never stored after processing and never used for model training. The LLM Gateway handles all this automatically.
  • Data grounding: Uses your Salesforce data to inform responses but keeps everything within your tenant through secure retrieval that respects your security controls.

How to check this:

  1. Setup → search “Einstein Setup” → click Einstein Setup.
  2. Toggle “Turn on Einstein” to On.
  3. Click “Go to Einstein Trust Layer”.
  4. Review the Large Language Model Data Masking toggle.
  5. Check Salesforce’s Trust documentation for their zero retention contracts.

Einstein Setup menu showing the Turn on Einstein toggle and navigation to the Einstein Trust Layer.

My rule of thumb: if you wouldn’t email the data in plain text to an external vendor, take a close look at how it’s being used in prompts.

Einstein Trust Layer page displaying the Large Language Model Data Masking toggle and configuration options.
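To build intuition for what the Trust Layer’s data masking does before a prompt ever reaches the LLM, here is a deliberately simplified sketch. The patterns and placeholder tokens are my own assumptions for illustration; Salesforce’s actual masking is configured in the Trust Layer and handled automatically by the LLM Gateway, not by code you write.

```python
import re

# Illustrative masking rules: (pattern, replacement placeholder).
# These regexes and tokens are assumptions, not Salesforce's actual rules.
MASKING_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),       # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),         # card-like digit runs
]

def mask_pii(text: str) -> str:
    """Replace recognizable PII patterns with placeholders before prompting."""
    for pattern, placeholder in MASKING_RULES:
        text = pattern.sub(placeholder, text)
    return text

print(mask_pii("Contact sarah@example.com, SSN 123-45-6789."))
# → Contact [EMAIL], SSN [SSN].
```

The point of the sketch is the mental model: sensitive values are swapped for neutral tokens before processing, so the model can still reason about the record without ever seeing the raw PII.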

3. Who Should Actually Use AI Features?

Don’t just enable Agentforce for everyone because you can. The better approach is to:

  • Create dedicated permission sets (Setup → Permission Sets).
  • Assign agent-specific permissions strategically (like “Use Agentforce SDR Agent”).
  • Review who has access every quarter.

The critical thing to understand is that Agentforce Agents operate within the user’s permission model. They respect field-level security and cannot access data that the user cannot access. However, agents can aggregate and analyze accessible data in ways that might surface insights users couldn’t directly query.

Focus on who should have AI access in the first place, not on building additional security layers you don’t need.
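The quarterly access review above is easy to promise and easy to forget. A lightweight way to keep yourself honest is to track when each permission set assignment was last reviewed and flag anything older than a quarter. The records below are made-up sample data, not the result of a real PermissionSetAssignment query; how you source them is up to you.

```python
from datetime import date, timedelta

# Sample assignment records; in practice you'd export these from your org.
assignments = [
    {"user": "maria@example.org", "perm_set": "AI Donor Communications",
     "last_reviewed": date(2024, 1, 15)},
    {"user": "dev@example.org", "perm_set": "Use Agentforce SDR Agent",
     "last_reviewed": date(2024, 9, 1)},
]

def overdue_for_review(records, today, max_age_days=90):
    """Flag AI permission assignments not reviewed within the last quarter."""
    cutoff = today - timedelta(days=max_age_days)
    return [r for r in records if r["last_reviewed"] < cutoff]

for r in overdue_for_review(assignments, today=date(2024, 10, 1)):
    print(r["user"], "→", r["perm_set"])  # → maria@example.org → AI Donor Communications
```

Dropping a script like this into a quarterly calendar reminder turns “review who has access” from a good intention into a five-minute task.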

4. How Do We Monitor What’s Happening?

You have three separate monitoring systems.

For Agentforce Agent conversations:

  • Setup → Agentforce Agents (under Agent Studio) → select your agent.
  • Open in Builder → Settings.
  • Check “Keep a record of conversations with enhanced event logs” to review agent behavior.

Agentforce Agent Details tab showing the Enrich event logs with conversation data setting.

For Einstein Trust Layer audit data:

  • Setup → search “Einstein Audit”.
  • Enable “Einstein Audit, Analytics, and Monitoring Setup”.
  • Data goes into Data Cloud: prompts, masking events, toxicity scores, user feedback.
  • Build custom reports on this stuff.

For email/calendar sync (Einstein Activity Capture):

  • Setup → Einstein Activity Capture
  • Configure your sync settings
  • View the activity timeline data

The things I look out for are:

  • Unusual patterns in Agentforce Agent usage
  • High toxicity scores in the audit data
  • User feedback showing that the AI is giving inaccurate responses
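Those three signals lend themselves to a simple triage pass over the Trust Layer audit rows you report on from Data Cloud. The sketch below is an assumption-heavy illustration: the row shape, field names (`toxicity_score`, `feedback`), and threshold are placeholders for whatever your actual report exposes.

```python
# Sample audit rows; field names are assumptions standing in for your
# Data Cloud report's actual columns.
audit_rows = [
    {"user": "a@example.org", "toxicity_score": 0.05, "feedback": "good"},
    {"user": "b@example.org", "toxicity_score": 0.82, "feedback": "bad"},
]

def flag_for_review(rows, toxicity_threshold=0.5):
    """Return (user, reasons) pairs worth a human look."""
    flagged = []
    for row in rows:
        reasons = []
        if row["toxicity_score"] >= toxicity_threshold:
            reasons.append("high toxicity")
        if row["feedback"] == "bad":
            reasons.append("negative user feedback")
        if reasons:
            flagged.append((row["user"], reasons))
    return flagged

print(flag_for_review(audit_rows))
# → [('b@example.org', ['high toxicity', 'negative user feedback'])]
```

Even a weekly run of something this crude beats discovering a pattern only after a stakeholder forwards you a bad AI response.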

5. What’s Our Rollback Plan?

Salesforce doesn’t give you emergency AI rollback procedures, so you have to build your own. Here’s what you can actually do.

Each AI feature has its own on/off switch:

  • Setup → Agentforce Agents → [select agent] → Deactivate.
  • Setup → Einstein Activity Capture (toggle it).
  • Setup → Einstein Search (feature level controls).
  • Remove permission sets instantly to cut off access.

Practice exercise: Write down which features you’ve enabled. In your sandbox, practice turning each one off. Time yourself. Make sure you know who has the authority to make that call.
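One way to capture that exercise so it survives staff turnover is a machine-checkable runbook: every enabled feature maps to its documented off switch and a named owner. This is a hypothetical structure of my own, not a Salesforce artifact; the feature names and Setup paths mirror the list above.

```python
# Hypothetical rollback runbook keyed by enabled AI feature.
RUNBOOK = {
    "Agentforce Agent": {
        "off_switch": "Setup → Agentforce Agents → [select agent] → Deactivate",
        "owner": "admin",
    },
    "Einstein Activity Capture": {
        "off_switch": "Setup → Einstein Activity Capture → toggle off",
        "owner": "admin",
    },
}

def missing_owners(runbook):
    """A runbook entry without an owner is a rollback nobody can execute."""
    return [feature for feature, entry in runbook.items() if not entry.get("owner")]

print(missing_owners(RUNBOOK))  # → []
```

Run the check whenever you enable a new feature: if the list it prints isn’t empty, you have an AI capability live in production with no one authorized to turn it off.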

Here’s a good real-life example. A nonprofit wanted Agentforce to draft thank-you emails for donors. It made sense on paper: save time, personalize at scale, let the development team focus on actual relationships. But here’s what they almost launched:

  • Agentforce Agent with default settings.
  • Prompt template: “Write a thank you email for {Contact.Name}”.
  • All Contact and Opportunity fields available in merge fields.

Here’s the test email it generated: “Dear Sarah, thank you for your $50K donation last week. We noticed this is your largest gift since your promotion to VP at Acme Corp. Your household’s $200K total giving history makes you one of our most valued partners.”

Accurate? Sure. Appropriate? Not even close. Here’s what we built instead:

  1. New prompt template in Prompt Builder—limited to First_Name, Preferred_Name, Last_Gift_Date.
  2. Custom permission set called “AI Donor Communications”.
  3. Assigned it to three trained development staff.
  4. Enabled agent conversation logging.
  5. Made human review mandatory before sending.

This took us just two days to complete. Six months later:

  • Donor communications are clean and appropriate.
  • The board approved expanding Agentforce to other areas.
  • Zero PII exposure incidents.
  • The development team saves about four hours per week.

Good governance doesn’t slow you down – it lets you move faster because you’re confident. So before you turn anything on, run through this checklist:

  • Field-level security configured for sensitive data
  • Prompt templates only use appropriate merge fields
  • Permission sets created (not profile level access)
  • Agent conversation logging enabled
  • Trust Layer audit data collection turned on in Data Cloud
  • Rollback procedure documented and tested in sandbox
  • At least one human review point in every AI-assisted workflow

Final Thoughts

AI governance isn’t a one-time project – it’s ongoing, just like data governance.

Make sure to look out for my next piece, where I’ll walk through how to build custom guardrails using Flow and validation rules so your team can actually innovate with Einstein without needing legal review for every single new prompt.

What’s your biggest concern about Einstein security? Drop it in the comments. Your questions literally shape what I write next!


The Author

Jeremy Carmona

Jeremy is a former journalist turned Salesforce Application Architect. He is a 13x certified consultant and NYU instructor. He founded Clear Concise Consulting, and specializes in AI governance, data quality, Flow, and nonprofits.
