
Data, AI, and New Tools: Developer Announcements from Dreamforce ’24

By Peter Chittum

Keynotes at Dreamforce are usually high-production-value events, scripted and programmed to within an inch of their life. The developer keynote this year was a different affair. Whether by design or by accident (it was initially combined with the Admin keynote), it took place in the room furthest from the core Dreamforce action, in the twilight hours of Dreamforce 2024. 

Gone was the vast in-the-round keynote experience. The presentation included very few slides (maybe fewer than ten?), and there were no claims of how many million Salesforce developers there now are. Instead, we were treated to a from-the-hip, code-oriented demofest that showcased working software.

Frankly, this is what I would hope for from a developer keynote. More code and APIs; less storyline. More demos and fewer slides. More real features and less Figma. This last point was laid bare when demo driver Alba Rivas’ demo failed to work, and speaker and product manager Ananya Jha ad-libbed, “We’re actually not using Figma for any of this.” With some deft troubleshooting, Alba rescued the demo and they were off again. 

Another criticism you’ll hear about Dreamforce keynotes is that too many of the features shown aren’t yet generally available (GA). I didn’t get to watch many other keynotes, but we can run the numbers on what this team demoed to see how they rated. Of the fourteen features highlighted in the slides, ten are GA today. Another three will be GA within the next two months. Only one beta feature was listed in the slides (although they did discuss one further developer preview feature).

All in all, this sounds like the setting for what could be a very good developer-focused session. Let’s get on to the features you’re waiting to hear about. 

The keynote was broken into the usual three chapters: a section on Data Cloud, another on how you might use AI to build apps, and a third on developer tools including how AI might help you do your job. 

Data Cloud

The broad theme of this topic requires no explanation. Data Cloud has been the main drum being beaten by Salesforce this past year. 

Data Cloud One

Due to be GA in October 2024, Data Cloud One offers a much better integration architecture for using Data Cloud across several orgs. Previously, if you had two orgs and wanted both to access Data Cloud data, each would require its own Data Cloud instance. Integration, data streams, identity resolution, and so on would need to be built for both.

Data Cloud One gives you a single shared Data Cloud instance, with each connected org accessing the data through bi-directional, multi-org sharing. 

SOQL for Data Model Objects

Data Model Objects (DMOs) are the Data Cloud analog of a Salesforce object. Previously, querying them required passing a SQL string through a method surfaced in the ConnectApi. For Salesforce developers used to SOQL, this was a bit of an oddity. It also lacked inline SOQL features like compile-time type and reference checking, and query variable binding. 

Querying a Data Cloud data model object with SOQL.

You can now query DMOs using inline SOQL. Note the variable binding used in the example. They didn’t mention compile-time type or reference checking, and I don’t want to assume, so if you’ve had a chance to test this, be sure to shout it out. There are also some specific concerns about security and data access for Data Cloud queries, so read up on those before you accidentally write code that surfaces data to the wrong user. 
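As a rough sketch of what this looks like in Apex (the DMO and field API names here are assumptions for illustration; your Data Cloud data model will have its own names, with DMOs carrying the `__dlm` suffix):

```apex
// Hypothetical DMO and field names for illustration only -- check your
// own Data Cloud data model for the actual API names.
String targetCity = 'San Francisco';

// Inline SOQL against a Data Model Object, with an Apex bind variable --
// no more hand-built SQL strings passed through ConnectApi methods.
List<UnifiedIndividual__dlm> guests = [
    SELECT ssot__Id__c, ssot__FirstName__c
    FROM UnifiedIndividual__dlm
    WHERE ssot__City__c = :targetCity
];
System.debug(guests.size() + ' unified guests found');
```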

In the demo example, they showed three queries run in succession of unified records, original source records, and some related data. If that seems clunky, you’re right – nicely leading us to the next feature. 

Data Graphs

With so much data unified from separate sources, the complexity can compound quickly. Data graphs allow you to define specific related member DMOs to be accessible from a single query along with the specific fields needed for that graph. Think of it a bit as a more refined hierarchical version of a traditional database view.

A preview of the JSON that would be returned for a data graph.

While not as flexible as the standard “query what you want when you want it” features of SOQL relationship queries, it does give developers a better tool than making multiple query calls. Predefining them is likely to be critical to their usage in the Sub-Second Realtime features of Data Cloud. 

Data graphs are accessible via the Salesforce REST API at the /services/data/v61.0/ssot/data-graphs endpoint. 
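Calling that resource from Apex might be sketched like this (the graph name, record ID, and exact resource path below the endpoint are my assumptions; verify them against the Data Cloud REST API reference):

```apex
// Hypothetical graph name and record ID -- adjust to your own data graph.
String graphName = 'Guest_Graph';
String recordId = 'unified-record-id';

HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getOrgDomainUrl().toExternalForm()
    + '/services/data/v61.0/ssot/data-graphs/data/' + graphName
    + '/' + recordId);
req.setMethod('GET');
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());

HttpResponse res = new Http().send(req);
// The body is the nested JSON view of the graph's member DMOs and fields.
Map<String, Object> graph =
    (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
```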

Vector Database and Search

Vector databases (sometimes referred to as vector stores) are a useful tool for storing, organizing, and retrieving unstructured data such as documents and images. For text, they can complement traditional full-text search with a meaning-based search called semantic search. In the world of generative AI, vector stores build on the same embeddings LLMs use to represent meaning, and can help ground model inference using retrieval-augmented generation (RAG). 

Data Cloud now has the ability to store unstructured data in its own vector store. This could prove helpful to customers who have not already invested in one when implementing Data Cloud.

The Data Cloud vector store is foundational to Data Cloud’s semantic search, which retrieves documents based on the meaning embedded in the vectors rather than on exact keywords. There is a beta hybrid search that supplements the vector search with a simultaneous classic text search. The vector store can also be part of RAG-based prompt grounding. 

A Note on Usage and Costs

I think it bears remembering that anything you do with Data Cloud – a SOQL query, the Sub-Second Realtime Platform – may fall under one of its billable usage types. As developers, we’re used to hacking together a SOQL query, loading data, and having pretty much every operation included in a cost-per-user model, with high-cost operations metered by limits. 

While building, architecting, and costing based on usage might be standard for any developer working in public cloud, this is likely to require a major change of mindset for the typical Salesforce-only developer or project team. 

Jay Hurst concluded his section of the keynote by characterizing Data Cloud as the “backbone for everything we are going to be building on this platform for the next decade.” So if cost-per-usage bakes your noodle, you’ve been forewarned. 

The future of the Salesforce platform is here and it is called Data Cloud. 

Building With AI

The second section focused on all the features and products you can use to create an app with AI. 

New Models in Model Builder

While Model Builder is not new, new models are produced frequently by both proprietary and open-source model providers. For the first time, Salesforce has released a model in Model Builder hosted on Hyperforce: Claude 3 Haiku, the smallest of Anthropic’s current generation of LLMs. 

It’s a model that balances speed, benchmark performance, token size, efficiency, security, and cost well – a sensible choice for a Salesforce-hosted model given the concerns Salesforce has to weigh when rolling out technology. 

CRM LLM Benchmarks

This may all sound great, but don’t take my word for it. The industry standard for objective model evaluation is a benchmark, so Salesforce AI Research has published the first CRM-specific benchmark, balancing measures of accuracy, cost, speed, trust, and safety. 

APIs for your AI Everywhere

The next section gave a whole bunch of examples of just how many places the AI functionality you build with Salesforce is automatically API-ready. 

This is great news and demonstrates that the AI product teams have upheld Salesforce’s longstanding commitment to making features API-accessible. Have you spent a lot of time building that perfect LLM utterance in a prompt template that’s working perfectly for your agents, but now you want to surface it in your homegrown field service app? No problem, it’ll be there behind an API where you can invoke it just as easily as you can from an Apex class or Lightning Web Component. 

Some examples of APIs we saw were:

  • Generating a response from a prompt template via a REST call.
  • Dynamically generating a prompt in JavaScript and combining deterministic output with AI-generated output.
  • Controlling hyperparameters like temperature and token length via the API.
  • Invoking a model generation from Apex as the backing for an LWC.
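As an illustration of that last point, invoking a prompt template from Apex looks roughly like this (the template name, input parameter, and record ID are hypothetical, and the ConnectApi class names should be double-checked against the current Apex reference for your API version):

```apex
// Hypothetical prompt template ('Guest_Welcome_Email') and input name.
ConnectApi.EinsteinPromptTemplateGenerationsInput input =
    new ConnectApi.EinsteinPromptTemplateGenerationsInput();

ConnectApi.WrappedValue guestRef = new ConnectApi.WrappedValue();
guestRef.value = '001xx000003DGbYAAW'; // hypothetical record ID
input.inputParams = new Map<String, ConnectApi.WrappedValue>{
    'Input:Guest' => guestRef
};

// Resolve the template, run it through the configured model, and read
// the generated text from the response.
ConnectApi.EinsteinPromptTemplateGenerationsRepresentation result =
    ConnectApi.EinsteinLLM.generateMessagesForPromptTemplate(
        'Guest_Welcome_Email', input);
System.debug(result.generations[0].text);
```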

The Models API itself is in Beta, so you can test it today. The four endpoints are: 

  • Generate standalone text
  • Generate chat (multi-segment conversation)
  • Generate embeddings
  • Submit feedback
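To give a feel for the first of those, a standalone text generation via a callout might be sketched like this (the named credential, model name, endpoint path, and payload shape are all assumptions on my part – verify them against the Models API beta reference before relying on them):

```apex
// 'ModelsAPI' is a hypothetical named credential pointing at the Models
// API host; the model name below is also purely illustrative.
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:ModelsAPI/einstein/platform/v1/models/'
    + 'sfdc_ai__DefaultOpenAIGPT4Omni/generations');
req.setMethod('POST');
req.setHeader('Content-Type', 'application/json');
req.setBody(JSON.serialize(new Map<String, Object>{
    'prompt' => 'Summarize this guest\'s recent activity in two sentences.'
}));

HttpResponse res = new Http().send(req);
// The response carries the generated text plus generation metadata.
Map<String, Object> payload =
    (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
System.debug(payload.get('generation'));
```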

Basically, if Salesforce built it, they built it with an API. And from what they shared, you have access to that API as well. 

Model History

One of my favorite features is the nicely packaged history and feedback feature. Models you use in Salesforce can store all past prompt input and generation output. You can also view feedback given to the model over time via a set of standard reports. This is not a trivial feature to incorporate into an LLM-based app – bundling it into a clean API is a nice touch. 

As you might expect, this history and feedback are also surfaced to the REST API.

Tools

The last chapter focused on developer tools. Here again, there is some AI of course, but this time it’s the AI that helps you do your job. 

Agentforce for Developers

What used to be Einstein for Developers has been rebranded and expanded to become Agentforce for Developers. 

But what did they expand? Previously, Einstein for Developers had inline automatic code completion and a single prompt option. I’ll be honest: I found the inline autocomplete distracting to the point that I had to disable it. I wanted a way to invoke it with a keyboard shortcut instead; having a chattering bot spitting ghost text at me every time I paused typing just didn’t work for my mindset when I code. Your mileage may vary, of course, but I digress. 

Inline autocomplete is still there, but now the helper feature allows you to have a multi-step conversation with Einstein. I think this makes sense – I expect there could be a lot of back and forth on a prompt to get the right completion. 

They’ve also added three / commands: 

  • /explain: summarise some code
  • /test: write some tests
  • /document: document some code

I’ve not had great experiences with code generation in previous versions of Einstein for Developers, and I haven’t had a chance to try out the /test or /document options. I did try /explain on a few bits of code I had lying around and was pleasantly surprised by what it was able to see and infer. 

It did miss one tiny thing: when asked to describe an overloaded method, it described the functionality of the base method well but neglected to mention the overloads at all. Pretty minor, but I’d be curious to see how that improves over time. 

They’ve also added some invisible features, such as RAG over your local project contents to better ground requests in the reality of your code and metadata. Salesforce guarantees that your proprietary code will never be used to further train or fine-tune models when using Einstein for Developers, ensuring the safety of your company’s intellectual property. 

LWC Local Development Revisited

LWC local development was made available as a beta in 2019. It was then deprioritized and abandoned, so it’s great to see an old friend come back with shiny new features – kind of like R2-D2 after the poor droid got on the wrong end of a blaster bolt. 

The investment here shows. Developers will appreciate hot module reloading: updates to your local project automatically refresh in the UI on save, no deployment necessary. They’ve also included local dev for mobile and Experience Cloud sites. 

I used the original local dev feature quite a lot. It was very cool to edit and see web UI updates the way other web frameworks had been doing for years by that time. But it was also awkward. The UI did not resemble your org’s UI; it was a generic, locally run web server that served up a single LWC on its own. 

Digging into the DOM tree or setting up JS debugging was also challenging, as the JS object hierarchy was so fundamentally different from how it ran when served from Salesforce.

The new local dev has at least improved the UI. Here we see local dev from the demo, with what looks like a decent facsimile of the standard Salesforce UI – a great improvement. 

Local development for LWC showing an almost standard Salesforce UI.

How hunting through JS objects to find the right line of code to debug will go remains to be seen. I find it unlikely it will be identical to running a page served from Salesforce’s servers, but you never know. Again, contact me if you find out, as I’d love to know. 

Scale Center and Apex Guru

I confess – Scale Center came out at a time when I was very distracted trying to learn all things AI, so I had to go back through the release history to catch up. Scale Center was released about a year ago (Winter ‘24) and is automatically available to Unlimited Edition customers in their production orgs and full-copy sandboxes. 

It provides a range of metrics to help you understand the performance health of your org, and lets you turn on tracing for a limited number of users and run investigations. Later releases added further performance metrics and slightly expanded which customers get it automatically. 

Upon finding an issue, one way to look at root causes is ApexGuru. ApexGuru uses LLMs to analyze code and look for potential problems. It was made GA in Spring ‘24 and has been trained to find problems like SOQL queries in loops, inefficient query filters and operations, expensive string operations, and debug statements. 

This was my first time viewing someone explaining ApexGuru, but my reaction to some of the problems was, “Couldn’t you have found that with good static code analysis and without the cost of an LLM before you moved it to production?” 

And the answer is, “Yes… and?” ApexGuru in Code Analyzer is now in pre-release (presumably still for Unlimited Edition), so you could run ApexGuru as part of a CI process before code ever reaches production. I also suspect that some of the problems it is designed to catch might evade most static code analysis rules. 

Bonus: Get Hands-On With the Coral Cloud Sample App

Before we wrap things up: if you’ve attended any of the AI-Now Tour events around the world in the past year, you’ll be familiar with the Coral Cloud application use case those trainings are built around. 

This is now available to install in your own org from the Developer Relations sample app gallery. Having a Data Cloud-enabled org is costly for Salesforce, so it’s a time-limited org (two weeks). Instructions for provisioning the right org are in the Coral Cloud Trailhead Quick Start project. This was also the use case woven through all of the keynote demos. 

The “Explore the Coral Cloud Sample App” quick start in Trailhead.

Summary 

For a Dreamforce dominated by AI and Data Cloud, this keynote felt about right. Whether any one developer uses these features will depend heavily on how broadly AI and Data Cloud are adopted across the Salesforce customer base. 

But as usual, the evolution of the developer toolset marches forward. 

The Author

Peter Chittum

Peter is a self-taught software developer. He worked at Salesforce for 12 years and is now a freelancer working in developer relations and client advisory.
