
Artificial Intelligence Policies for Grantmakers: 5 Essentials for Risk Management

Grantmakers are increasingly using artificial intelligence tools such as ChatGPT or Microsoft Copilot to improve productivity and inspire new levels of creativity.

When used responsibly, AI has the power to supercharge your work by helping unearth new approaches to identifying grantees and partners, solving complex problems, and maximizing capacity.

But this immense promise also comes with tremendous risk. As grantmakers look to unleash AI's potential, they are confronting legitimate worries about issues such as privacy, data security and bias. And they are wrestling with existential questions about just how much this emerging technology will change our lives.

While it's difficult to predict how our organizations, and our world, will change in the years ahead as AI expands and evolves, we can take steps now to ensure we are using AI ethically and managing our risks.

With that in mind, a growing number of grantmakers are creating AI policies and guidelines that foster innovation and experimentation while also ensuring their teams are using AI responsibly.

With the right guardrails in place, you can create a culture at your organization that encourages employees to use AI responsibly to optimize their work and broaden your organization's impact.

Understanding AI's Risks

In many ways, the explosion of AI echoes the early days of the Internet in the 1990s and early 2000s and, later, the arrival of social media.

The Internet and social media sparked innovations that were impossible to fully fathom when they first appeared. But they also unleashed widespread disinformation, stoked isolation and fear, and created significant risks to our privacy.

Grantmakers have an opportunity, and some would say a responsibility, to use AI to amplify their missions and to lend their expertise and voice to ensure that AI is harnessed for good.

A critical first step in fulfilling this responsibility is to create rules of the road that make everyone who works for or with their organizations fully aware of the potential risks, including the already present risks of perpetuating bias, losing control of intellectual property and sensitive information, and damaging critical relationships.

Provide Context for Your Policies

As you create your AI policy, make sure your team understands why it matters, and emphasize that it is not merely a set of bureaucratic rules and regulations. Ideally, it is a document built with a purpose.

To motivate staff buy-in, open your policy with a brief statement of purpose that outlines the risks it helps mitigate.

People may also come to AI with different levels of understanding. Establish a common language by defining key terms. Here are some of the terms your staff should know:

  • Generative AI: The use of AI to generate new content, such as text or images.
  • Intellectual property (IP): Creations of the mind, such as literary and artistic works, that are protected by law.
  • Third-party information: Data collected by an entity that doesn't have a direct relationship with the user.

Highlight Use Cases and Scope

Team members who are new to artificial intelligence may not intuitively know how to use AI tools effectively. With that in mind, your policy may include a section offering examples and ideas on how to use AI at work. This also helps set cultural expectations for how artificial intelligence should be utilized at your organization.

Here are some suggestions:

  • Encourage regular use: Experiment with different tools in your daily work.
  • Frame the purpose: AI tools are assistants, not authorities, that help you streamline your work or brainstorm new ideas.
  • Provide use cases: Include concrete examples of how to use approved tools in everyday tasks.

It can also be useful to define the scope of use, especially if your organization works with consultants, volunteers, or part-time staff. To ensure accountability, clearly define who has access to your AI tools and who is expected to follow your policies.

Five Essential Guidelines for AI Use

As more grantmakers adopt AI, they are seeing several common challenges emerge.

The following five guidelines address those challenges and help protect your organization's privacy and integrity.

1. Ensure Accuracy

AI tools draw information from sources across the internet, some of which are not reliable, and they can generate plausible-sounding but inaccurate content. To ensure accuracy, review, fact-check, and edit AI-generated content before incorporating it into your work.

2. Uphold Intellectual Integrity

Plagiarism is always a risk when using AI to generate content. Before repurposing any AI-assisted material, confirm it is original by running it through a plagiarism detection tool. Some free, useful tools include Grammarly, Plagiarisma, and Dupli Checker.

As with any content, AI-assisted work should also reflect your authentic voice and perspective, so edit for consistent style and tone.

3. Stay Conscious of Bias

Because AI models learn from content created by people, and people are inherently biased, AI-generated content often reflects those biases. Before publishing, review materials for bias to ensure objectivity, and never use AI-generated content that perpetuates stereotypes or prejudices.

4. Honor Confidentiality

AI tools do not guarantee privacy or data security. When interacting with ChatGPT or similar tools, refrain from sharing sensitive or personal information, such as pasting in a grantee's application details so the tool can draft an award letter. Doing so could breach privacy laws or existing confidentiality agreements. Instead, use the tool to draft a generic template that you can update with specific grantee information yourself, as in the sample prompt after the list below.

Sensitive data includes but is not limited to:

  • Donor and grantee names and contact information
  • Personal identification numbers and account-related information
  • Financial data
  • HR and recruiting information
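
As an illustration, a privacy-safe prompt might look like the following. The bracketed placeholders are hypothetical and stand in for details you fill in later, not data you share with the tool:

"Draft a warm, professional grant award letter from a foundation to a grantee. Use the placeholders [Grantee Name], [Program Name], [Award Amount], and [Grant Period] wherever specific details belong."

You can then complete the placeholders within your own systems, so no sensitive data ever leaves your organization.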

5. Solicit Feedback Regularly

AI tools are dynamic and quickly evolving, so revisit your policy regularly to ensure it stays relevant. Ask team members to share regular feedback on their experience with the tools to help refine it.

Host an AI and Policy Training

While an AI policy is critical for most grantmakers, it is important not to simply create and hand down a policy without proper training.

As you introduce your policy, conduct an organization-wide training to ensure everyone knows how to use basic AI tools and understands how to incorporate the policy into their day-to-day work.

During your training, you'll want to set expectations for what AI is and is not, and demonstrate how to use different tools. Consider also providing a list of approved tools that people can easily access and reference.

When reviewing your policy, lead with purpose. Walk people through the ethical and security risks the policy helps mitigate, and explain how it keeps your organization aligned with its values and mission. Carefully review your essential guidelines and leave plenty of time for questions and discussion.

Always Keep Evolving

Artificial intelligence is rapidly evolving, with new tools constantly surfacing. Stay attuned to what's new so you can continue to optimize your productivity while successfully managing security risks.

Smart policies are the cornerstone of effective and safe AI use. Invest in crafting and updating policies that keep your data, and your organization's mission and values, intact. Want to learn more about the risks AI poses and how to craft smart usage policies? Check out our webinar, "AI Policies for Grantmakers: How to Manage Risk and Harness AI for Good."