
How Salesforce Shapes Ethical AI Standards in the Agent Era

The era of agentic AI is officially here with Agentforce. The rise of AI over the past few years has been extraordinary, and at this important inflection point, it is critical to consider the impacts of emerging technologies on the humans who are using them.

As we build tools that delegate routine tasks to autonomous AI agents, freeing people to focus on high-risk, high-judgment decisions, trust must take center stage. At Salesforce, we're prioritizing building trust between humans and agents with responsible AI principles that guide our design, development, and use of agentic AI.

Guiding principles for responsible agentic AI

The Office of Ethical & Humane Use developed Salesforce's first set of trusted AI principles in 2018. As we entered the era of generative AI, we augmented our trusted AI principles with a set of five guidelines for developing responsible generative AI - principles that also hold true for agentic AI. Now, as we continue to guide the responsible development and deployment of AI agents here at Salesforce, we again reviewed our principles, policies, and products to enable our employees, partners, and customers to use these tools safely and ethically.

Our guiding principles for the responsible development of agentic AI include:

  • Accuracy: Agents should prioritize accurate results. We must develop them with thoughtful constraints like topic classification, a process that maps user inputs to topics, each containing a relevant set of instructions, business policies, and actions to fulfill the request. This gives the agent clear direction on what actions it can and can't take on behalf of a human. And if there is uncertainty about the accuracy of a response, the agent should enable users to validate it, whether through citations, explainability, or other means.

Agentforce ensures that generated content is backed by verifiable data sources, allowing users to cross-check and validate the information. The Atlas Reasoning Engine, the brain behind Agentforce, also enables topic classification to set clear guardrails and ensure reliable results.
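To make the topic-classification pattern concrete, here is a minimal sketch in Python. The topic names, keywords, and matching logic are illustrative assumptions, not Agentforce's actual implementation; in practice the classifier would be an LLM or trained model, and the topics would encode real business policies.

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    """A topic bundles the instructions and permitted actions for a class of requests."""
    name: str
    instructions: str
    allowed_actions: list[str]
    keywords: list[str] = field(default_factory=list)

# Illustrative topics; a real deployment would define these per business policy.
TOPICS = [
    Topic(
        name="order_status",
        instructions="Look up the order and report its status. Cite the order record.",
        allowed_actions=["lookup_order"],
        keywords=["order", "shipping", "delivery"],
    ),
    Topic(
        name="refund_request",
        instructions="Verify eligibility before initiating any refund.",
        allowed_actions=["check_refund_eligibility", "create_case"],
        keywords=["refund", "return", "money back"],
    ),
]

FALLBACK = Topic(
    name="out_of_scope",
    instructions="Decline politely and offer to connect the user with a human.",
    allowed_actions=["escalate_to_human"],
)

def classify_topic(user_input: str) -> Topic:
    """Map a user request to a topic. This naive keyword match stands in for a
    real classifier; unmatched requests fall back to a narrowly scoped
    out-of-scope topic rather than an unconstrained response."""
    text = user_input.lower()
    for topic in TOPICS:
        if any(kw in text for kw in topic.keywords):
            return topic
    return FALLBACK

topic = classify_topic("Where is my order? It was supposed to ship Monday.")
print(topic.name)             # order_status
print(topic.allowed_actions)  # ['lookup_order'] - the agent may take only these actions
```

The key design point is the constrained fallback: a request that matches no topic is routed to a limited escalation path instead of being answered freely.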

  • Safety: We must mitigate bias, toxicity, and harmful outputs by conducting bias, explainability, and robustness assessments, as well as ethical red teaming. Agent responses and actions should also protect the privacy of any personally identifiable information (PII) present in the data used for training, with guardrails in place to prevent additional harm.

Agentforce includes built-in toxicity detection through the Einstein Trust Layer, a robust set of guardrails that protects the privacy and security of customer data by flagging potentially harmful content before it reaches the end user. This is in addition to default model containment policies and prompt instructions that limit the scope of what an AI agent can and will respond to. For example, an LLM can be instructed not to base its responses on variables such as gender identity, age, race, sexual orientation, or socioeconomic status.
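The guardrail pattern described above, screening generated content before delivery, can be sketched as follows. This is a simplified illustration, not the Einstein Trust Layer itself; score_toxicity is a hypothetical stand-in for whatever moderation model a real deployment would call.

```python
TOXICITY_THRESHOLD = 0.7  # illustrative cutoff; tuned per deployment in practice

def score_toxicity(text: str) -> float:
    """Stand-in for a real toxicity classifier (e.g., a hosted moderation model).
    Returns a score in [0, 1]; this naive version only flags a tiny blocklist."""
    blocklist = ("idiot", "stupid")
    return 1.0 if any(word in text.lower() for word in blocklist) else 0.0

def deliver_response(model_output: str) -> str:
    """Screen generated content before it reaches the end user."""
    if score_toxicity(model_output) >= TOXICITY_THRESHOLD:
        # Flagged content is withheld (and, in practice, logged for review)
        # instead of being delivered.
        return "I'm sorry, I can't share that response. Let me connect you with a human."
    return model_output

print(deliver_response("Your order shipped yesterday."))  # passes the screen unchanged
```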

  • Honesty: When collecting data to train and evaluate our models, we need to respect data provenance and ensure that we have consent to use data (e.g., open-source, user-provided). We must also be transparent that an AI has created content when it is autonomously delivered (e.g., a disclaimer in a chatbot response to a consumer, or use of watermarks on an AI-generated image).

Agentforce is designed with standard disclosure patterns baked into AI agents that send outbound content. Agentforce Sales Development Representative and Agentforce Service Agent, for example, clearly disclose that content is AI-generated, whether in outbound messages or in conversations with customers and prospects, to ensure transparency with users and recipients.
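A minimal sketch of such a disclosure pattern, assuming a simple append-a-notice design; the disclosure wording and the compose_outbound helper are hypothetical, not Agentforce's actual mechanism.

```python
AI_DISCLOSURE = "This message was generated with the help of AI."

def compose_outbound(body: str, ai_generated: bool) -> str:
    """Attach a standard disclosure to any autonomously delivered, AI-generated message."""
    if ai_generated:
        return f"{body}\n\n--\n{AI_DISCLOSURE}"
    return body

print(compose_outbound(
    "Hi Jordan, following up on your trial of our platform.",
    ai_generated=True,
))
```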

  • Empowerment: In order to "supercharge" human capabilities, we need to prioritize the human-AI partnership and design meaningful and effective hand-offs. In some cases it is best to fully automate processes, but in other cases AI should play a supporting role to the human - especially where human judgment is required.

Agentforce empowers people to take control of high-risk decisions while automating routine tasks, ensuring humans and AI work together to leverage their respective strengths.
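One common way to encode this kind of hand-off is a risk-based routing policy: routine tasks are automated, drafts go to a human for approval, and high-judgment decisions are handed off entirely. The sketch below is an illustrative assumption about how such a policy might look, not Agentforce's design; the task names and routes are hypothetical.

```python
from enum import Enum

class Route(Enum):
    AUTOMATE = "automate"          # low-risk, routine: the agent completes the task
    HUMAN_REVIEW = "human_review"  # agent drafts, a human approves before anything ships
    HUMAN_ONLY = "human_only"      # high-judgment: handed off to a person entirely

# Illustrative policy table; a real deployment would define risk per task and org policy.
ROUTING_POLICY = {
    "send_order_confirmation": Route.AUTOMATE,
    "draft_contract_renewal": Route.HUMAN_REVIEW,
    "approve_large_refund": Route.HUMAN_ONLY,
}

def route_task(task: str) -> Route:
    """Default to human handling for anything the policy doesn't explicitly cover."""
    return ROUTING_POLICY.get(task, Route.HUMAN_ONLY)

print(route_task("send_order_confirmation"))  # Route.AUTOMATE
print(route_task("approve_large_refund"))     # Route.HUMAN_ONLY
print(route_task("unknown_task"))             # Route.HUMAN_ONLY (safe default)
```

The safe default matters: an unrecognized task falls back to human handling rather than automation.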

  • Sustainability: Model developers should focus on creating right-sized models where possible to reduce their carbon footprint. When it comes to AI models, larger doesn't always mean better: In some instances, smaller, better-trained models outperform larger, general-purpose models. Additionally, efficient hardware and low-carbon data centers can further reduce environmental impact.

Agentforce leverages a variety of optimized models, including xLAM and xGen-Sales developed by Salesforce Research, which are specifically tailored to each use case. This approach enables high performance with a fraction of the environmental impact.
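The "right-sized model" idea can be expressed as a simple selection rule: pick the smallest model that clears the task's quality bar, rather than defaulting to the largest available. The catalog below uses made-up names and numbers purely for illustration; real choices would rest on per-task evaluations.

```python
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    params_billions: float  # rough size proxy for energy and carbon cost
    task_quality: float     # measured quality on the target task, in [0, 1]

# Illustrative catalog; real numbers come from per-task evaluations.
CATALOG = [
    ModelOption("small-task-tuned", 1.0, 0.91),
    ModelOption("medium-general", 13.0, 0.92),
    ModelOption("large-general", 70.0, 0.94),
]

def right_size(quality_bar: float) -> ModelOption:
    """Pick the smallest model that clears the task's quality bar."""
    eligible = [m for m in CATALOG if m.task_quality >= quality_bar]
    if not eligible:
        raise ValueError("No model meets the quality bar; revisit the bar or the catalog.")
    return min(eligible, key=lambda m: m.params_billions)

print(right_size(0.90).name)  # small-task-tuned: meets the bar at a fraction of the size
```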

The path forward

As Agentforce continues to evolve, we're focused on intentional design and system-level controls that enable humans and AI agents to work together successfully - and responsibly.

By adhering to these principles and guidelines, Salesforce is committed to developing AI agents that are not only powerful and efficient but also ethical and trustworthy. We believe that by focusing on these core principles, we can build AI solutions that our customers can trust and rely on, paving the way for a future where humans and AI drive customer success together.

Go deeper:

  • Learn more about Agentforce
  • Learn more about how Salesforce develops trustworthy AI agents
Paula Goldman
Chief Ethical and Humane Use Officer, Salesforce