salesforce.com Inc.

09/23/2024 | Press release | Distributed by Public on 09/23/2024 07:06

Tracking Our Progress on the White House Voluntary AI Commitments

Tracking Our Progress on the White House Voluntary AI Commitments

September 23, 20245 min read

In September of 2023, Salesforce and seven other enterprise AI companies signed on to the White House's set of voluntary AI commitments to ensure the safe, secure, and trustworthy development and use of AI technologies.

We are proud to release a white paper that outlines our company's significant progress toward these commitments and reinforces our dedication to designing, developing, and using generative AI with ethics at the core.

Commitment to AI safety, security, and trust

At Salesforce, trust has always been our #1 value. We've spent over a decade investing in ethical AI, creating frameworks that help both our business and our customers innovate safely and securely.

At Salesforce, trust has always been our #1 value. We've spent over a decade investing in ethical AI, creating frameworks that help both our business and our customers innovate safely and securely.

As AI advances, we recognize our responsibility to ensure ethical innovations keep pace, especially given our role as an enterprise AI provider. By prioritizing trust and safeguarding sensitive data, we can pave the way for a trusted AI future.

The World's First LLM Benchmark for CRM.

TRY IT NOW

Advancing the future of trusted enterprise AI

Detailed in the white paper are each of the eight commitments we made as part of the White House agreement, along with our progress toward each. These represent a subset of our trusted AI work overall. Highlights include:

  1. Ensuring internal and external red teaming of models or systems. Our team has conducted 19 internal and two external red teaming exercises, which are simulated attacks within a controlled environment to identify potential vulnerabilities, across our AI models. This has led to product improvements including a 35% reduction in toxic, biased, or unsafe outputs in a marketing feature, and added bias-prevention guardrails for AI agents.
  1. Information sharing among companies and governments regarding the risks and capabilities of AI. In the last year, we have published over 20 public-facing articles that help anyone, including other companies and public sector officials, better understand how to ethically and humanely use AI. These articles range from the top risks and related guidelines for generative AI to how we've built trust into our AI. To help our customers understand the risks and capabilities of AI, we've also created resources and guides like the NIST AI Risk Management Framework quick-start guide and Human at the Helm action pack.
  1. Investing in insider threat safeguards to protect proprietary and unreleased model weights. We have rigorous measures in place for operational, cyber, and physical security, with our AI features undergoing a stringent security and legal approval process. Open-source models are tested on our equipment to ensure compliance and we provide secure storage for API keys, maintain distinct trust zones, and ensure human oversight for high-risk decisions.
  1. Incentivizing third-party discovery and reporting of issues and vulnerabilities. Salesforce has invested more than $18.9 million in our bug bounty program, which leverages ethical hackers to prevent cybersecurity threats across our technology offerings. In addition to bug bounties, issues can be surfaced and reported by third-parties with the help of features built-in to the Einstein Trust Layer, where users can report vulnerabilities through our feedback framework.
  1. Developing and deploying mechanisms that enable users to understand if visual content is AI-generated, including robust provenance or watermarking. Our Responsible AI and Technology Team conducts ethics reviews of our AI products and works with product and engineering teams to mitigate identified risks. In the past year, they have completed over 350 reviews, which resulted in over 100 trust patterns - standard guardrails to improve safety, accuracy, and trust - now built into our AI features.
  1. Publicly reporting model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias. Salesforce provides public documentation on product risks and mitigations, as outlined in our public policy efforts. We also publish model cards, similar to nutrition labels, that provide critical information about how our models work, for a number of our predictive AI models to disclose the risks of the technology, along with the data it was trained on, ethical considerations, use cases, and more.
  1. Prioritizing research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination, and protecting privacy. Salesforce's AI Research & Insights team released research detailing the importance of a human touch in AI, and our Office of Ethical and Humane Use has funded user research headcount to focus on trust and responsible AI. Additionally, the Salesforce AI Research team has published research on trust and safety metrics in LLMs and have open-sourced a safety library, AuditNLG.
  1. Developing and deploying frontier AI systems to help address society's greatest challenges. In 2023, we released our Blueprint for Sustainable AI, outlining our strategy to minimize the environmental impact of AI, along with the Salesforce Accelerator - AI for Impact, which supports purpose-driven nonprofits in developing AI-powered climate solutions. These initiatives led to the launch of our Sustainable AI Policy Principles, a framework designed to guide legislation for AI development to help reduce environmental impact and drive climate innovation, exemplifying Salesforce's commitment to impactful climate solutions.

Unlock More Insights on Trust in AI.

CHECK OUT THE SALESFORCE STAT LIBRARY

We're proud of the progress we have made in developing a trusted AI experience for all stakeholders, but our work is not done. We will continue to share updates as we progress and collaborate with customers, industry partners, government leaders, and civil society groups worldwide to develop AI products with ethics at the forefront.

Go deeper: