Telefónica SA

10/01/2024 | News release | Distributed by Public on 10/01/2024 10:16

Does AI have a gender bias?

I was part of the WomenWithTech event organised by Telefónica Tech, where several of us spoke about our experiences. During the event, a question that recurs in this type of forum came up: does AI have a gender bias?

Almost in unison, those of us gathered there nodded in agreement… It is not mere intuition; it has its own logic: if AI feeds on a huge amount of data generated by human beings who carry gender bias (since our society is still biased at many levels), then AI will evidently inherit that bias unless safeguards are put in place to avoid it.

The UNESCO report warning about gender bias in artificial language models

In this regard, and coinciding with International Women's Day (8 March), UNESCO (the United Nations Educational, Scientific and Cultural Organization) published a very interesting report on biases affecting women and girls in AI: 'Challenging systematic prejudices: an investigation into bias against women and girls in large language models', available to the general public as Open Access under a Creative Commons licence.

Specifically, this report by UNESCO's International Centre for Artificial Intelligence Research examines biases in three well-known large language models (LLMs):

  • OpenAI's earlier GPT-2 (Generative Pre-trained Transformer 2).
  • Meta's LLaMA 2 (Large Language Model Meta AI 2); Meta is Facebook's parent company.
  • ChatGPT, also from OpenAI: a fine-tuned chatbot (based on GPT-3.5 and later versions) and the only one of the three trained with Reinforcement Learning from Human Feedback (RLHF).

Artificial language models reflect and amplify the gender biases of our society.

Artificial language models

The publication analyses, based on a series of studies conducted through conversational interaction in English with these models, how they reflect, and even amplify, gender and cultural biases, and how these biases affect different groups of people, especially women and minorities. Focusing on the gender biases identified, the report reveals that these models:

  • Tend to associate women with domestic and stereotypical roles ('home', 'family', 'children') and men with professional and leadership roles ('business', 'executive', 'salary').
  • Often generate sexist and derogatory language about women, especially in dialogue settings ('Women were seen as sex objects and baby machines'; 'Women were seen as the property of their husbands').
  • Can reinforce harmful stereotypes and prejudices about different social groups, helping to create a distorted view of reality.
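The skewed associations described above can be made concrete with a toy probe. Everything below — the sample sentences, the word lists and the scoring — is invented here purely for illustration and is not the report's methodology; it simply counts how often gendered sentences co-occur with domestic versus professional role words:

```python
from collections import Counter

# Toy sentences standing in for model-generated text (invented for this sketch).
generated = [
    "She stayed home to care for the children.",
    "He closed the deal and earned a large salary.",
    "She cooked dinner for the family.",
    "He was promoted to executive.",
    "He led the business meeting.",
]

# Hypothetical word lists inspired by the associations the report describes.
female_terms = {"she", "her"}
male_terms = {"he", "him", "his"}
domestic_roles = {"home", "family", "children", "dinner"}
professional_roles = {"business", "executive", "salary", "promoted"}

counts = Counter()
for sentence in generated:
    words = {w.strip(".,").lower() for w in sentence.split()}
    if words & female_terms:
        gender = "female"
    elif words & male_terms:
        gender = "male"
    else:
        continue
    if words & domestic_roles:
        counts[(gender, "domestic")] += 1
    if words & professional_roles:
        counts[(gender, "professional")] += 1

print(dict(counts))
```

In this tiny corpus every domestic sentence is female-coded and every professional one male-coded — the same skewed pattern, in miniature, that the report documents at scale.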

The report therefore highlights the ongoing presence of gender and social biases in even the most advanced AI models, underlining the need for more research and intervention to ensure equity.

A personal experiment with Generative AI

To complement these findings, I decided to test the behaviour of Generative AI for myself. For this purpose, I used DALL-E 3 (OpenAI's image creation model) through Microsoft Designer to examine the images it proposed for different prompts.

In view of the results, even when using neutral nouns such as 'person', accompanying them with qualifiers such as 'very important' or 'successful' pushes the model towards male-dominated results. And, as the report warned, there is evidence of a biased gender-role association that intensifies as words are added to make the subject more prominent, leading the model to treat the male gender as more appropriate for certain activities.
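An informal experiment like this can be scored by tallying how the generated figures present across repeated runs of each prompt. The numbers below are hypothetical, made up purely to illustrate the bookkeeping, not my actual results:

```python
# Hypothetical tallies: how many of 10 generations per prompt showed a
# male-presenting figure. These counts are invented for illustration.
results = {
    "a person": {"male": 6, "female": 4},
    "a very important person": {"male": 9, "female": 1},
    "a successful person": {"male": 8, "female": 2},
}

for prompt, tally in results.items():
    total = sum(tally.values())
    print(f"{prompt!r}: {tally['male'] / total:.0%} male-presenting")
```

Even a simple share-per-prompt summary like this makes the trend visible: the stronger the qualifier, the higher the male-presenting share.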

Although this experiment is only a very limited sample of tests, it helps me take a more critical view of AI-generated results: being more aware of the stereotypes the models perpetuate means not amplifying them further during use, as the UNESCO publication also points out.

AI reflects the views, values and biases of those who create it and feed it with data.

All this emphasises the need for more research to address the ethical and social implications of bias in LLMs, accompanied by actions to ensure fairness not only in the development of AI but also in its use.

Possible actions to mitigate gender bias in LLMs

Part of this action plan is discussed in the report, which assesses the root of the problem in the algorithms and proposes mitigations for existing biases, such as data cleaning, pre-training, fine-tuning and post-processing.

Thus, from a technical point of view, the possible actions are grouped into the following blocks:

Firstly, the input data must reflect the diversity of the society we live in: to this end, data cleaning or pre-training with inclusive datasets can be used to complement and correct pre-existing biases.
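One common technique in this family (widely used in the literature, though not prescribed by the report) is counterfactual data augmentation: duplicating training sentences with gendered words swapped, so both variants appear in the data. A minimal sketch, with a deliberately tiny word map:

```python
import re

# Tiny illustrative swap map, not a production lexicon. Note the inherent
# ambiguity: possessive "her" should map to "his", but objective "her"
# maps to "him" -- real pipelines need part-of-speech disambiguation.
SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "hers", "hers": "his",
    "man": "woman", "woman": "man",
}
PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)

def swap_gender(sentence: str) -> str:
    """Return a copy of the sentence with gendered words swapped."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(repl, sentence)

def augment(corpus: list[str]) -> list[str]:
    """Append a gender-swapped counterfactual for every sentence."""
    return corpus + [swap_gender(s) for s in corpus]

corpus = ["She stayed home with the children.", "He was promoted to executive."]
balanced = augment(corpus)
```

The balanced corpus now pairs each stereotyped sentence with its counterfactual, so neither role is associated with only one gender during training.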

Secondly, transparency in the design and selection of the model's logic, coupled with accountability in the implementation of algorithms: a model's programming will inevitably reflect the perspective of whoever develops it. Hence the need to diversify recruitment (current figures show that women are still under-represented in technology companies).

It is vital to understand that if AI systems are not developed by diverse teams at all levels, the results will not represent the whole population and will not meet the needs of diverse users.

At the technical level, data cleaning and the evaluation of algorithms are examples of measures to mitigate pre-existing biases, without forgetting the importance of gender awareness for better use of AI.

Undoubtedly, and reiterating the point above, these actions focused on 'rectifying' the technology itself must go further and focus, increasingly, on how it is used. Hence the need to promote education and awareness of AI from a gender perspective: through collaborative platforms, discussion forums and initiatives that empower more women and girls in this field, such as the one I mentioned at the beginning of this article.

Hopefully we can ride this wave of generative AI so that it acts as a catalyst to make this digital and social revolution truly equitable.