New America Foundation


AI, Sovereignty, and Global Inequality

Nov. 4, 2024

Overview

The Planetary Politics Initiative at New America hosted a policy salon dinner on artificial intelligence (AI) sovereignty and global inequality on September 24, 2024. With over two dozen subject-matter experts in attendance, the discussion built on the recent passage of the Global Digital Compact (GDC) at the 79th Session of the United Nations General Assembly. Member states found consensus in acknowledging the need to bridge the digital divide and reshape thinking around AI governance and sovereignty. The dinner brought together thought leaders from around the world, specializing in different aspects of AI, and exemplified the GDC's theme of forming new partnerships across sectors in both the Global Majority and the Global Minority.

Attendees repeatedly highlighted the gaps between advanced economies, where tech capacity is highly concentrated, and developing nations striving to keep up in the emerging AI arms race. They also emphasized the urgency of addressing risks in emerging AI technologies through a representative, multi-stakeholder approach to governance. There was strong consensus on the need to establish a trustworthy, inclusive, and needs-based AI ecosystem. The discussion was held under the Chatham House Rule, meaning that the information in this summary can be used, but speakers are not identified.

Key Takeaways

  1. Incorporate the Majority World perspective: Start with inclusive governance to enable people to understand what is being governed.
  2. Think beyond AI governance: Build governance models that focus not only on regulation but also on pro-social growth and change.
  3. Financing in the right direction: Collaborate with companies, governments, and civil society to ensure that financial resources are directed appropriately.
  4. Needs-based governance: Adopt a needs-based approach, starting from grassroots requirements, and building from there.

Discussion

The dinner began with general observations about the lead-up to the signing of the GDC. Attendees agreed that after decades of living in what Ian Bremmer has called a "technopolar moment," where Big Tech dominates the global order, recent years have seen efforts to push back and expand the number of stakeholders. Calls for digital sovereignty are one manifestation of this pushback, though concerns persist that this vision may be too state-centric, raising critical questions about representation, multi-stakeholder cooperation, and the future of an open internet. Against this backdrop, attendees felt that the GDC underscores the right principles: making AI trustworthy, inclusive, and sustainable.

To implement these principles, states need foundational elements such as mechanisms to influence global governance decisions, stable public institutions, and civil society involvement. Other factors, such as digital connectivity, access to electricity, and basic infrastructure, also influence states' ability to participate in global governance. These elements, in turn, are tied to states' overall governance and developmental capacity.

Four key themes emerged:

  1. Incorporating the Majority World Perspective
    "Who makes the rules?" This fundamental question, raised by one speaker, prompted attendees to consider the asymmetry in AI investment, development, and governance. As noted in a recent New America-Igarapé Institute policy brief, there are at least 640 AI principles across 60 countries, almost all located in the Global Minority. This means a small minority of the world is writing and applying rules for the vast majority of users. One vivid example of this asymmetry is the distinction between concerns over misuse (focused on risks, accountability, and capacity development in the Minority World) and missed use (the Majority World's concern that restrictions will impede their ability to harness AI for development). However, a growing view suggests this dichotomy is false, as citizens in the Global South also value fundamental rights like privacy.

    Every attendee agreed that the Majority World perspective needs more priority. Several stressed that the issue goes beyond a simple Global North-Global South divide, even as technical resources, knowledge, and money remain concentrated in wealthier countries. Trust is vital for any multi-stakeholder governance structure, but many Majority World nations reject or are skeptical of the interventions of international institutions and global governing bodies due to poor communication, unmet commitments, and historical patterns of colonialism. As a result, many of these nations are focusing on developing local models and infrastructure while pushing for various forms of digital sovereignty.
  2. Thinking Beyond AI Governance
    Participants noted that AI governance must extend beyond purely digital ecosystems. Countries should design and assess AI governance frameworks with broader socioeconomic goals in mind, such as reducing inequality and improving education and health. Rather than using a narrow tech-focused lens, policymakers could consider perspectives based on youth, gender, or civil society. Some attendees stressed the importance of involving civil society in AI governance, as most people are unaware of their digital rights. Civil society can help bridge this knowledge gap. Additionally, policymakers need capacity building to design and implement governance structures, and sovereign nations have the right to be part of these conversations. The discussion emphasized that policymakers must listen to a wider range of voices, as citizens' priorities may not directly relate to AI.

    Attendees also suggested developing a global taxonomy of risks, sensitive to geographic, cultural, and social perceptions. Moreover, they highlighted the need for a common language to demystify AI and its safety concerns.
  3. Financing in the Right Direction
    The conversation around investments raised key questions:
    • How can countries be encouraged to open their models to the world, given the rapid market growth and the strategic potential of AI?
    • Will the cost of computation rise or fall, and how will that affect marginalized nations?
    • How can we incentivize innovations that lower access costs?
  4. A Needs-Based Approach
    There was broad agreement on the importance of a needs-based approach. Countries must first determine what data they need and then build from there. Many nations do not require high-performance models for their immediate challenges. Africa was cited as a region at risk of "overcapacity" because organizations are not aligning with grassroots needs.
    Attendees emphasized that funding systems often neglect grassroots priorities. The next step should be to determine what tools countries genuinely need, pool human capital, and build an AI ecosystem based on those needs.

Areas for Further Inquiry

  • How can we better define governance, ensuring that North, South, East, and West are included in shaping frameworks?
  • What exactly are we governing? We need to assess what risks we are mitigating and what benefits we are trying to maximize.
  • How will the cost of compute change over time with new players in the market, and how do we incentivize affordable access?
  • What do we mean by AI? Predictive and generative AI approaches present different governance, funding, and socioeconomic challenges.
  • Data remains a key issue. What data is being used to build models? Where is it coming from, whose views does it represent, and what biases does it contain? How can we collect, use, and store data ethically?
  • What is the role of open models in promoting Sustainable Development Goals? Although open source doesn't mean free, are open models the least harmful option for smaller, less-resourced nations?
  • How do we broaden the conversation to involve local actors and stakeholders? How can we boost public literacy and participation in AI governance?