
Event Round-Up: Introducing techUK’s Inaugural Digital Ethics Unconference with Responsible AI Practitioners

30 Jul 2024

On 30 April 2024, techUK welcomed a group of responsible AI practitioners to our inaugural Digital Ethics Unconference. The unconference format, which gained popularity in the tech sector in the mid-nineties, represented a move away from formal conferences, giving attendees the freedom to create the event they wanted. This approach fosters a more dynamic and collaborative environment, enabling participants to address the most pressing and relevant issues in real time. By allowing attendees to shape the agenda, unconferences often lead to more diverse discussions and unexpected insights, which is particularly valuable in a rapidly evolving field like AI ethics.

In this spirit, the techUK event was participant-driven. Attendees were able to present their suggested topics of discussion for live voting and then self-assign to topics that rose to the top. This free-flowing structure allowed responsible AI practitioners across the ecosystem to engage in peer-to-peer learning, knowledge-sharing, and collaborative problem-solving.

We extend our gratitude to all participants who attended and engaged with this participatory event format. We look forward to continued conversations on Digital Ethics.

Session One: Topics and takeaways

Topics for discussion suggested by participants on the day itself:

  • Who is responsible for responsible AI? (12 votes)

  • AI governance platforms, dashboards and their place (6 votes)

  • Responsible AI Skills (4 votes)

  • Generative AI, IP and copyright (2 votes)

  • Effective ethics forums (1 vote)

Each group then took one of the top four voted topics and was tasked with capturing its key takeaways on a poster:

AI governance platforms, dashboards and their place

This group of responsible AI practitioners differentiated existing AI governance SaaS platforms into three categories: compliance and standards, governance, and technical assessment. For each category, the group noted the advantages and disadvantages they have experienced when engaging with that type of tool or platform.

  1. Regarding compliance and standards, there are several advantages to consider. These include the ability to integrate with existing compliance processes and ML Ops, support for defining AI in governance contexts, and help in managing the rapidly changing landscape, especially for global organizations. Standards can also support interoperability and assist in managing regulatory overlap or conflicts across multiple jurisdictions. However, there are drawbacks to this approach. It may lead to responsible AI becoming a mere tick-box exercise, oversimplifying complex ethical considerations. Value-laden terms like 'fairness' might be reduced to simple compliance metrics, potentially missing the softer impacts of emerging technologies. Additionally, these frameworks may not adequately prompt corporations to consider "what is the right thing to do" beyond compliance, and they might struggle to address highly context-specific risks.

  2. In terms of governance, platforms offer several benefits. They allow for risk management at a portfolio level, providing a view across all projects and systems. They can automate aspects of governance, driving operational effectiveness and promoting a pro-governance culture. Users can link training requirements to approved roles and responsibilities, making accountability clear. Unlike more standardised forms of oversight, governance platforms can help prevent "lost" or "shadow" AI. However, challenges remain. Determining acceptable levels of risk is subjective, and there is a need to properly account for organisational risk appetite alongside regulatory requirements, which affects inherent risk scoring. Another challenge lies in determining a level of rigour for system documentation that is proportionate to the risk involved.

  3. Technical assessment platforms bring consistency to evaluations and support responsible engineering by standardising ways to assess fairness, drift, and robustness across an organisation. However, they face limitations. The diversity of tests required means that one tool cannot fit all needs, and acceptable measures differ per jurisdiction, particularly for concepts like fairness. There is a risk that automation might lead to important ethical risks being missed. Moreover, the outputs are highly dependent on the accuracy and rigour of the models and data fed into them, raising questions about how best to train and educate people on the right ways to evaluate AI systems. An illustrative sketch of the kind of checks such platforms standardise follows below.
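To make the discussion concrete, the snippet below is a minimal, hypothetical sketch (not drawn from the session itself) of two checks a technical assessment platform might standardise: a simple fairness metric (demographic parity difference) and a simple drift metric (population stability index). The function names, metric choices and synthetic data are illustrative assumptions only.

```python
# Minimal, hypothetical sketch of checks a technical assessment platform might
# standardise. Metric choices and the synthetic data are assumptions.
import numpy as np


def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest gap in positive-prediction rates across groups (a simple fairness metric)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))


def population_stability_index(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population stability index between reference and live data (a simple drift metric)."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # capture live values outside the reference range
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    ref_pct = np.clip(ref_pct, 1e-6, None)    # avoid log(0) and division by zero
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))


# Example with synthetic data only
rng = np.random.default_rng(0)
preds = rng.integers(0, 2, 1000)            # binary model decisions
groups = rng.integers(0, 2, 1000)           # a two-group protected attribute
train_scores = rng.normal(0.0, 1.0, 1000)   # reference score distribution
live_scores = rng.normal(0.3, 1.0, 1000)    # live scores that have drifted

print("Demographic parity difference:", demographic_parity_difference(preds, groups))
print("Population stability index (drift):", population_stability_index(train_scores, live_scores))
```

As the group noted, which measures are acceptable varies by jurisdiction and context, so any such standardised check is a starting point rather than a definitive assessment.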

Who is responsible for responsible AI?

This group of participants spoke about the need for maturity models, assurance teams for methodology, and joint risk management with customers. Their poster suggested there is a lack of oversight, asking whether systems and processes should be codified and audited, and pointed to the need for 'teeth' in corporate ethics culture so that action is taken by individuals with domain knowledge. The group posed the following questions:

  • How do we measure responsible AI?

  • What is unique to AI's risk profile?

  • Who is marking the practice of ethics in companies, given that we should not grade our own homework?

  • What is the board's role in supporting responsible AI?

  • Questions of liability: who goes to jail when/if something goes wrong, and harm is caused?

  • Is it truly too soon to regulate AI?

Responsible AI Skills

This group of RAI practitioners presented their dialogue in the format of questions, stating 'it wasn't just questions... but we do have a lot of questions!'. The poster included the following questions which suggest areas of dialogue for responsible AI practitioners to keep considering:

  • How do we achieve cultural change?

  • How do we change business thinking and not just dump responsible AI on technical teams?

  • How can people be empowered to act ethically and cause a fuss when harm is caused by emerging technologies?

  • What role does accreditation play - is it more of a driver?

  • Ethics can seem conceptual - how do we make it concrete and communicate why?

  • How do we develop diverse skills and foster different perspectives?

Generative AI, IP and copyright

This group of RAI practitioners kept their poster to the point by outlining five things to keep in mind when considering GenAI's impact on IP:

  1. Publicly accessible doesn't equal public domain

  2. Should we protect style?

  3. How much control should we have over expression?

  4. Solutions are needed for 'opting out' works from training

  5. Technical protections exist, such as 'poisoning' models

Session Two: Topics and takeaways

Topics for discussion suggested by participants on the day itself:

  • Embedding ethics into culture (7 votes)

  • Bridging the implementation gap by operationalising ethics (4 votes)

  • Lessons learned from AI fails and successes (4 votes)

  • Neuroethics (4 votes)

  • How to facilitate meaningful conversations and participation in digital ethics (3 votes)

  • Synthetic relationships (2 votes)

  • Deepfakes and synthetic media (1 vote)

Each group then took one of the top four voted topics and was tasked with capturing its key takeaways on a poster:

Embedding ethics into culture

This group suggested a few ways that organisations can work to embed ethics in their culture, while also asking pressing questions:

Answers

  • Red teaming and games - give people chances to practice

  • Horizon scanning allows for forward thinking

  • Information literacy

  • Tap into existing teams and be receptive to their needs - champions and representatives

  • Combine policies, initiatives, champions and leadership

  • Be sector specific

  • Enable teams to do their jobs ethically

Questions

  • The role of HR - onboarding, psychological safety - what new skills will they need?

  • What incentives do you need to lean on (safety for brand)?

  • How can we train? Is it too risky to put too much responsibility on non-experts?

Bridging the implementation gap by operationalising ethics

This group drew a large vehicle and suggested that AI is currently like a car whose engine keeps getting larger but which has no brakes or steering wheel. They noted that ethics is not easy to codify, as it is socially and culturally dependent, and that bridging the implementation gap means pairing proportionate risk mechanisms with iterative improvement of operationalised ethics through assurance techniques. Finally, operationalising ethics is often about optimising while working through trade-offs.

Lessons learned from AI fails and successes

This group revisited where tech has gone wrong in the past, identifying the harms and lessons learned in order to inform and support better ways forward.

  1. Ethics must be considered at the start - have organisations thought about the potential harms before beginning development?

  2. 1st level causes: immediate oversights ("we forgot", "we rushed"), versus 2nd level causes: systemic issues (no processes in place)

  3. Oversight, due diligence - 'nominated manager; VP sign-off'

  4. Spillover - trying to be ethical but ending up causing another problem/harm

Neuroethics

This group of RAI practitioners decided to translate their conversation into three risk categories that should be considered in neuroethics: corporate, societal and individual. These were accompanied by real-world use cases and relevant regulation.

Corporate risks
  • Monopolies and competition law

  • Dominance of Neuralink

  • TESCREAL, as described by Timnit Gebru - transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism (EA) and longtermism

  • Right to repair

  • Tech redundancy

Societal risks
  • Dual use and international competition for tech, inequalities between nations regarding access

  • Use of tech to suppress political dissent

  • Thought policing

  • Impact on childhood development

  • Agency and free will

Individual risks
  • Inequality

  • Neurodivergence versus uniformity

  • Responsible science communication

  • Resisting eugenicist narratives

Regulation
  • Global safeguards (sandbox BCIs), the neuro-rights bill in Chile and the Human Rights Act

Use Cases
  • Enhancements of existing conditions

  • Medical use cases

  • Future of neuro-monitoring (data)

  • Advertising

  • War

The inaugural digital ethics unconference proved a great success, with many lessons learned from gathering responsible AI practitioners to engage in thought-provoking conversations and peer-to-peer learning. This format allowed for a breadth of topics to be discussed and showcased emerging priorities in the community.

If you have found this summary of the Digital Ethics Unconference interesting and would like to find out more about techUK's work on digital ethics and AI assurance, and how to get involved alongside members through the Digital Ethics Working Group and future Digital Ethics Unconferences, please contact Tess Buckley at [email protected]. To join the Digital Ethics Unconference co-host's gatherings, please connect with Myrna MacGregor.
