
AI News
04 May 2025
Read 6 min
Salesforce Enhances AI Model Lineup to Power Agentic Systems
Salesforce supercharges AI with tools that act, automate tasks, and boost team productivity safely.
Salesforce Boosts AI to Support Smarter Business Tools
Salesforce is expanding its artificial intelligence (AI) lineup to handle higher-level decision-making. The company is moving beyond AI that simply answers questions to tools that can take action on their own based on business needs. These changes focus on introducing agentic AI systems, improving performance benchmarks, and strengthening built-in safety measures.
What Is an Agentic AI System?
An agentic AI system does more than answer questions. It performs tasks automatically. Instead of just giving users information, it carries out steps toward solving a problem or completing a task. These AIs take actions like replying to emails, updating systems, or adjusting schedules, all with limited human input.
How It Helps Businesses
By improving these systems, Salesforce wants to help businesses:
- Save employee time by automating tasks.
- Reduce errors by using smarter decision tools.
- Serve customers faster through AI support agents.
Large Action Models (LAMs): The Next Step
To support agentic AI, Salesforce is using a new kind of model called Large Action Models (LAMs). These are different from Large Language Models (LLMs), which only process and create text. LAMs are trained not just to talk, but also to act.
What Makes LAMs Unique
Unlike traditional AI tools, LAMs respond with actions. These actions may include:
- Updating customer records.
- Sending follow-up emails.
- Filing tickets or alerts in internal systems.
- Triggering tasks in connected apps like Slack or Google Calendar.
This lets businesses move from AI-generated suggestions to real, useful steps carried out in real time.
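In code, the idea of a model triggering actions rather than just text could look something like the following minimal sketch. Everything here (the `ActionRequest` shape, the handler registry, the handler names) is a hypothetical illustration of the pattern, not a Salesforce API:

```python
# Hypothetical sketch: a model emits a structured "action" request,
# and a registry of allowed handlers carries it out.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ActionRequest:
    name: str                      # e.g. "update_record", "send_email"
    params: dict = field(default_factory=dict)

# Registry of permitted actions: the model can only trigger what is listed here.
HANDLERS: Dict[str, Callable[[dict], str]] = {}

def register(name: str):
    def wrap(fn: Callable[[dict], str]):
        HANDLERS[name] = fn
        return fn
    return wrap

@register("update_record")
def update_record(params: dict) -> str:
    # Stand-in for a CRM update call.
    return f"record {params['record_id']} updated"

@register("send_email")
def send_email(params: dict) -> str:
    # Stand-in for an email integration.
    return f"email sent to {params['to']}"

def dispatch(req: ActionRequest) -> str:
    # Reject anything outside the allow-list instead of executing it.
    if req.name not in HANDLERS:
        raise ValueError(f"action '{req.name}' is not permitted")
    return HANDLERS[req.name](req.params)
```

The allow-list is the key design choice: the model proposes actions, but only pre-registered handlers can ever run.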
Trust Guardrails Are Built In
Salesforce knows that not all AI results are accurate or safe. That’s why the company is building guardrails to keep its AI tools reliable. These guardrails act as digital safety nets, stopping AI from performing harmful or incorrect actions.
What Guardrails Do
Trust features help control the output and include:
- Rules that prevent bias in the AI’s responses.
- Filters that stop harmful or offensive content.
- Checks to make sure actions meet company policy.
- Logs that let admins review what AI tools have done.
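The layered checks above can be sketched as a simple pipeline: each guardrail can veto an action, and every decision is logged for admin review. The rules, thresholds, and action format below are invented for illustration only:

```python
# Hypothetical guardrail pipeline: content filter, policy check, audit log.
BLOCKED_TERMS = {"offensive_word"}     # stand-in for a real content filter
POLICY_MAX_REFUND = 100                # stand-in for a company policy rule

audit_log = []                         # record of every allowed/blocked action

def content_ok(action: dict) -> bool:
    # Block actions whose text contains filtered terms.
    text = action.get("text", "")
    return not any(term in text for term in BLOCKED_TERMS)

def policy_ok(action: dict) -> bool:
    # Enforce a company policy: refunds above the limit need a human.
    if action["type"] == "refund":
        return action["amount"] <= POLICY_MAX_REFUND
    return True

def guarded_execute(action: dict) -> str:
    for check in (content_ok, policy_ok):
        if not check(action):
            audit_log.append(("blocked", action, check.__name__))
            return "blocked"
    audit_log.append(("allowed", action, None))
    return "executed"
```

Because every outcome lands in `audit_log`, admins can later review not just what was blocked, but which check blocked it.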
These steps help protect companies using generative AI tools across departments—such as marketing, customer service, and IT.
Setting New AI Standards
Salesforce is part of a group effort to set clear rules for how AI should work in business. The company helped introduce new measurement standards through an industry benchmark called Dolma, an open-source dataset used to test AI model quality.
Why Benchmarks Matter
Benchmarks help make sure AI systems:
- Perform to expectations in real settings.
- Do not fail under pressure or with large amounts of data.
- Work fairly across different user groups and languages.
By using benchmarks, Salesforce can compare its LAMs and LLMs against others and keep improving.
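At its core, benchmark-style evaluation means scoring a model's answers against a labeled test set. The toy dataset and "model" below are invented purely to show the shape of that comparison:

```python
# Hypothetical benchmark harness: fraction of prompts answered correctly.
def accuracy(model, dataset):
    correct = sum(1 for prompt, expected in dataset if model(prompt) == expected)
    return correct / len(dataset)

# Tiny invented test set of (prompt, expected answer) pairs.
DATASET = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("3*3", "9"),
]

def toy_model(prompt: str) -> str:
    # Stand-in model with one deliberately wrong answer.
    answers = {"2+2": "4", "capital of France": "Paris", "3*3": "8"}
    return answers.get(prompt, "")
```

Running `accuracy(toy_model, DATASET)` yields 2/3, and the same harness shape lets different models be compared on an identical dataset.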
Improving Human-AI Teamwork
One goal of Salesforce’s AI upgrade is to support teams, not replace them. These smart systems are designed to work alongside real workers. Instead of taking over, they help employees make faster and better choices.
Example Use Cases
For example, in customer service:
- An AI agent takes in a complaint from a user.
- It collects customer data automatically.
- It drafts a reply and starts a ticket in the system.
- A live agent reviews and sends the final response.
In this flow, the AI saves time and improves speed, but the human worker is still in charge of approval and final actions.
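The four-step flow above can be sketched as code: the AI intake step drafts a reply and opens a ticket, but nothing goes out until a person approves it. The classes and data here are hypothetical, not a Salesforce interface:

```python
# Hypothetical human-in-the-loop ticket flow.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ticket:
    customer: str
    complaint: str
    draft_reply: str
    status: str = "pending_review"   # nothing is sent in this state

def ai_intake(customer: str, complaint: str) -> Ticket:
    # AI step: collect the complaint and draft a response automatically.
    draft = f"Hi {customer}, we're sorry about: {complaint}. We're on it."
    return Ticket(customer, complaint, draft)

def human_review(ticket: Ticket, approve: bool,
                 edited_reply: Optional[str] = None) -> Ticket:
    # Human step: the live agent stays in charge of the final action.
    if approve:
        ticket.draft_reply = edited_reply or ticket.draft_reply
        ticket.status = "sent"
    else:
        ticket.status = "rejected"
    return ticket
```

The `status` field enforces the division of labor: the AI can only ever produce a `pending_review` ticket, and only `human_review` can move it to `sent`.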
Einstein Copilot Gets an Upgrade
Salesforce plans to use LAMs inside Einstein Copilot, its smart assistant for businesses. This means Copilot will not only offer answers and next steps but also act on those steps. These may include creating new leads in a CRM system or editing product listings online.
Why It Matters for Customers
By making Copilot smarter, users can:
- Get faster support and issue resolution.
- Streamline tasks in customer management software.
- Automate common business routines.
This drives efficiency in both sales and service operations.
Keeping AI Responsible and Transparent
Salesforce says its AI systems are designed to be responsible. That means the company is focused on privacy, fair results, and explainable actions. Businesses using their AI tools should be able to trust decisions made by the system.
Key Features Supporting Trust
- Clear records of AI actions for auditing.
- Human control over sensitive processes.
- Alerts when the AI model makes low-confidence decisions.
- Options for businesses to adjust safety levels and privacy controls.
With these in place, companies are more likely to use AI with confidence.
Looking Ahead: Smarter Tools, Safer AI
The AI space is moving fast, and Salesforce wants to stay ahead. Their shift toward action-based AI tools shows they are thinking about what businesses need tomorrow, not just today. By mixing reliable models, high standards, and human-focused design, Salesforce aims to lead safe and strong AI development for business use.
What This Means for Companies
Businesses using Salesforce can expect:
- More tools to save time through automation.
- Better team support from smart assistant features.
- Confidence that AI use stays within rules and standards.
As more companies rely on AI to stay competitive, these updates will likely play an important role in setting the stage for the future of enterprise technology.