UNESCO - United Nations Educational, Scientific and Cultural Organization

07/10/2024 | News release | Archived content

Making AI more open could accelerate research and tech transfer

Open science and AI could be a powerful combination

During the June 6 event, the speakers discussed how AI and open science can each accelerate scientific progress. For Anna L. Nsubuga, UK Ambassador to UNESCO, open science has the potential to "drive economic growth, tackle shared global challenges and promote rigour and integrity in the global research system".

At the same time, she added that "our collective thriving research ecosystems generate significant volumes of data". AI could be used to "unlock endless possibilities", such as "improved medical diagnostics, enhanced drug discovery and the design of novel materials with unique properties".

AI has already been instrumental in achieving scientific milestones, such as predicting protein structures and facilitating fully autonomous research. It can play a transformative role at many stages of the scientific process, from the design of experiments to the analysis of large sets of data - data that are often made accessible because of open science.

The intersection of open science and AI could enable tech transfer and the "emergence of new avenues of research", observed Dr Laura-Joy Boulos, Associate Professor at Saint Joseph University of Beirut and 2020 L'Oréal-UNESCO International Rising Talent. The combination could also provide "access to information that is across disciplines, across regions and across languages", added Dr Dureen Samander Eweis, Science Officer at the Centre for Science Futures of the International Science Council.

For its part, UNESCO is contributing to the development of responsible AI-driven research through its Abdus Salam International Centre of Theoretical Physics (ICTP). The ICTP is a founding member of the Global AI Alliance, a group of developers and researchers from international research institutes and high-tech companies spearheaded by IBM and Meta that have been working together since December to accelerate the adoption of open AI.

On 27 May, the ICTP and IBM Research Europe and Africa announced the launch of a prize, to be awarded for the first time next year, for scientists or research teams who have made a major contribution to theory, algorithms or applications related to an open approach to AI.

When UNESCO member states unanimously adopted the UNESCO Recommendation on Open Science in November 2021, they had fresh evidence before them of the effectiveness of an open approach to science: over the preceding 20 months, they had witnessed how the development of a life-saving vaccine had been accelerated by the open sharing of the virus's genome and other research findings.

Thanks to this Recommendation, we now have both a globally agreed definition and clear principles for open science. UNESCO also published open data guidelines in 2023.

Moreover, UNESCO programmes are practicing what they preach. For instance, in 2017, UNESCO's Intergovernmental Hydrological Programme launched the Water Information Network System (WINS), an open-access, participatory platform for sharing, accessing and visualizing water-related information, as well as for connecting water stakeholders. Built on a geographical information system tool, WINS allows users to store, access and create tailored maps on water at all levels.

Lack of transparency in AI may threaten the credibility of AI-driven science

Despite its promise, AI presents obstacles both to open science and to the replicability, equity and trustworthiness of AI-driven scientific innovation. For example, because the development of artificial intelligence is dominated by private companies, scientists tend to be incentivised against following open science practices such as openly publishing how their algorithms work and which training data they used.

As Denisse Albornoz, Senior Policy Adviser in the Royal Society's Data and Digital Technologies team, put it, "deep learning is opaque by nature. If, on top of that, these models are proprietary, we cannot evaluate them or scrutinise them and understand, for example, how representative they are".

Misleading outcomes can arise from "incomplete, incorrect or unrepresentative datasets", added Professor Alison Noble, Technikos Professor of Biomedical Engineering at the University of Oxford and chair of the Royal Society's Science in the Age of AI Working Group, "posing potential harm in high-stakes fields like medicine".

According to Ms Albornoz, this could also lead to "the private sector shaping and defining the research agenda" and also "creating dependencies on the infrastructure" they provide.

Worse, as Prof. Noble made clear, "the lack of transparency also creates challenges for reproducibility, one of the key characteristics of trusted research".

These challenges are set to have a much greater impact as science becomes increasingly reliant on AI. To tackle them, Ms Nsubuga suggested that "we must insist that AI-based research meet open science principles and practices".

Incentives can help to mainstream open AI

Prof. Noble emphasised the need for incentives to promote open science practices in AI-driven research. Public-private and cross-disciplinary collaboration could help with this, while also enhancing the quality of research and creating more accurate models.

When collaborating with private actors, Laura-Joy Boulos recommended focusing on venture capitalists because of their flexibility. She also noted that obtaining high-quality data "is something that researchers can do better than industrials and something we should capitalise on. The research community could be a partner that supplies good data", she suggested; in return, the private sector could follow the research community's guidelines.

Albornoz spoke of the need to make sure that research funders "lower the pressure to make everything AI-ready and everything AI-specific", since not all science benefits from AI.

For her part, Noble advocated developing AI models that "require less energy" to address the environmental impact of AI tools.

No need to trust AI blindly

To help different communities understand better, benefit from and trust AI-driven science without having to "trust it blindly", Ms Albornoz recommended making AI more explainable. Dr Boulos took up this idea by suggesting that one invest in projects "that can build explanation interfaces" for AI tools so that it is "not just experts talking to the AI but actually users".

As Ms Albornoz said, "meaningful access to essential AI infrastructure" also means "developing the right skills" and even "creating opportunities to make sure that diverse scientific communities and diverse perspectives can influence research agendas". In this way, they could become "co-designers rather than passive users", Prof. Noble added.

For her part, Dr Eweis stressed the importance of "supporting countries' scientific communities in implementing existing frameworks and guidelines" such as the UNESCO Recommendations on Open Science and on the Ethics of Artificial Intelligence.

Given the fast pace at which AI is developing, Dr Boulos suggested that UNESCO continue to provide the "space and the pace" for reflection and analysis. "We need to be able to respond fast when we talk about AI safety when anything new comes out, to be able to provide advice," she said, "so that everybody understands some of the issues but also responds together".

UNESCO's next step is to continue working with the Royal Society to assemble experts and develop a practical factsheet on this topic.