October 28, 2024 | Press release
The era of agentic AI is officially here with Agentforce. The rise of AI over the past few years has been extraordinary, and at this important inflection point, it is critical to consider the impacts of emerging technologies on the humans who are using them.
As we build tools that delegate routine tasks to autonomous AI agents, freeing people to focus on high-risk, high-judgment decisions, trust must take center stage. At Salesforce, we're prioritizing building trust between humans and agents with responsible AI principles that guide our design, development, and use of agentic AI.
The Office of Ethical & Humane Use developed Salesforce's first set of trusted AI principles in 2018. As we entered the era of generative AI, we augmented those principles with five guidelines for developing responsible generative AI, guidelines that also hold true for agentic AI. Now, as we continue to guide the responsible development and deployment of AI agents at Salesforce, we have again reviewed our principles, policies, and products to enable our employees, partners, and customers to use these tools safely and ethically.
Our guiding principles for the responsible development of agentic AI include:
Accuracy: Agentforce ensures that generated content is backed by verifiable data sources, allowing users to cross-check and validate the information. The Atlas Reasoning Engine, the brain behind Agentforce, also enables topic classification to set clear guardrails and ensure reliable results.
Safety: Agentforce includes built-in toxicity detection through the Einstein Trust Layer, a robust set of guardrails that protects the privacy and security of customer data, flagging potentially harmful content before it reaches the end user. This is in addition to default model containment policies and prompt instructions that limit the scope of what an AI agent can and will respond to. For example, an LLM can be instructed not to base its responses on variables such as gender identity, age, race, sexual orientation, or socioeconomic status. (A simple sketch of this overall guardrail pattern follows this list.)
Honesty: Agentforce is designed with standard disclosure patterns baked into AI agents that send outbound content. Agentforce Sales Development Representative and Agentforce Service Agent, for example, clearly disclose when content is AI-generated or when a customer or prospect is conversing with an AI agent, ensuring transparency for users and recipients.
Empowerment: Agentforce empowers people to take control of high-risk decisions while automating routine tasks, ensuring humans and AI work together to leverage their respective strengths.
Sustainability: Agentforce leverages a variety of optimized models, including xLAM and xGen-Sales developed by Salesforce Research, which are specifically tailored to each use case. This right-sized approach enables high performance with a fraction of the environmental impact of larger, general-purpose models.
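The internals of these guardrails are proprietary, but the general pattern the list describes (classify the request, screen the draft response, disclose AI authorship) can be sketched in a few lines of Python. This is a minimal, hypothetical illustration under stated assumptions: the function names (classify_topic, is_toxic, generate_grounded_reply), the allowed-topic set, and the keyword-matching logic are all invented for the example and are not Agentforce or Einstein Trust Layer APIs; a production system would use trained models at each step.

```python
# Hypothetical sketch of an agent guardrail pipeline. None of these names
# are real Salesforce APIs; all logic is a stand-in for model-based components.

ALLOWED_TOPICS = {"order_status", "returns", "product_info"}  # assumed scope

def classify_topic(message: str) -> str:
    """Stand-in for a topic classifier; a real system would use a model."""
    text = message.lower()
    if "return" in text:
        return "returns"
    if "order" in text:
        return "order_status"
    return "out_of_scope"

def is_toxic(text: str) -> bool:
    """Stand-in for a toxicity detector; a real system would use a model."""
    blocked_terms = {"offensive_example"}  # placeholder term list
    lowered = text.lower()
    return any(term in lowered for term in blocked_terms)

def generate_grounded_reply(message: str, topic: str) -> str:
    """Placeholder for a retrieval-grounded LLM call (assumption)."""
    return f"Here is what I found about your {topic.replace('_', ' ')} request."

def respond(message: str) -> str:
    # Guardrail 1: only answer within approved topics.
    topic = classify_topic(message)
    if topic not in ALLOWED_TOPICS:
        return "I'm not able to help with that topic; let me connect you with a person."

    draft = generate_grounded_reply(message, topic)

    # Guardrail 2: flag potentially harmful content before it reaches the user.
    if is_toxic(draft):
        return "I'm not able to share that response; let me connect you with a person."

    # Disclosure: make clear the reply is AI-generated.
    return "[AI-generated response] " + draft

print(respond("Where is my order?"))
```

The key design point the sketch captures is ordering: scope is checked before generation, and content is screened after generation but before delivery, so the disclosure label is only ever attached to a reply that has passed both gates.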
As Agentforce continues to evolve, we're focused on intentional design and system-level controls that enable humans and AI agents to work together successfully - and responsibly.
By adhering to these principles and guidelines, Salesforce is committed to developing AI agents that are not only powerful and efficient but also ethical and trustworthy. We believe that by focusing on these core principles, we can build AI solutions that our customers can trust and rely on, paving the way for a future where humans and AI drive customer success together.