11/05/2024 | News release | Distributed by Public on 11/05/2024 14:36
Hi, everyone! We're back with part two of our Q&A with Anthology's Compliance, Trustworthy AI, and Privacy Officer Stephan Geering. (If you missed part one, check it out first.) In this post, Stephan dives a little deeper into the ethical use of AI in higher education, how institutions can protect their data, and what to consider when choosing a vendor for AI tools.
Governance: How should institutions govern and guide faculties on the use of AI for various purposes (e.g., teaching and research) to prevent unintended consequences and inconsistent practices?
We talked a bit about governance in the first blog post, but let's dig a little deeper. I know from conversations with our customers that governing large and often decentralized institutions is a challenge. What seems to work well for many organizations is to leverage existing processes and governance structures that are already successful. For example, institutions can build on existing risk management, privacy, or security governance structures and review processes to ensure consistency and coverage across faculties. But it's important to combine this with the inclusion of diverse and multi-disciplinary input from across the institution.
You therefore want to use a combination of bottom-up and top-down approaches. The bottom-up approach helps you consistently receive feedback from across the institution, so the AI framework and related documents can be adapted as needed. The top-down approach supports consistent communication, implementation, and enforcement of the institution's policies and guidance. In the AI Policy Framework we developed as a resource for our customers, we provide more detailed recommendations.
Partnering with trustworthy vendors: When partnering with third-party AI vendors or existing vendors that start using AI, what steps should institutions follow to ensure their vendors meet high standards of ethical AI?
A large part of AI risk management is vendor risk management. The reality is that sophisticated AI applications are difficult to build, so many organizations rely on vendors. This can present challenges, such as limited influence over functionality, but it can also be an advantage: many vendors have not only stronger technical expertise but also mature security and responsible AI programs, thanks to their scale.
Institutions should build on their existing procurement and vendor due diligence processes. This will allow them to include the necessary questions in their due diligence questionnaire (e.g., Does the vendor use institution data to train their models? Can the institution opt in or out of features? Does the vendor have a responsible AI program?). Vendors should have answers to those questions and be able to support customers with detailed documentation.
Data privacy and security: What measures should institutions take to safeguard personal information and other confidential information such as research data?
It's important that institutions are realistic: their staff and students will use generative AI, both to help with their courses and with their research work. Clear policies and guidelines on how to safely use generative AI tools are important, but not enough. Institutions should also give staff and students access to enterprise versions of AI tools, which generally come with stronger privacy and responsible AI commitments than personal AI tools. Enterprise versions typically will not use institution data to train AI models and will have robust privacy and security measures. The IT and security teams can help as well: some security tools now include the ability to monitor usage of external generative AI tools and, where necessary, block access to them. Institutions should leverage these capabilities to understand how external AI tools are being used, so they can adapt their approach to policy enforcement and awareness.
Thanks so much to Stephan Geering for taking the time to share his thoughts on these important points. For more on the ethical use of AI in higher education, be sure to catch us on the road with the Ethical AI in Action World Tour, taking place in major cities across the globe this October, November, and December.