The University of Melbourne

22 November 2024 | Press release | Archived content

AI cyberbullying detector developed to combat online abuse

The AI Ally development team.

University of Melbourne researchers, in partnership with Girl Geek Academy and funded by the eSafety Commissioner, have developed new artificial intelligence (AI) software designed to help combat online abuse by empowering victims.

In response to high rates of tech-based gendered violence in Australia, the research team surveyed 230 Australian girls and women aged 14-25 about their experiences using social media and their opinions on AI moderators.

Forty-four per cent of respondents reported being regularly subjected to gendered harassment on at least one social media platform; in most cases, the abuse took the form of sexist comments.

Many respondents said that formally reporting online abuse is extremely challenging because gathering and compiling evidence is arduous.

Harnessing AI, the researchers have now developed an online dashboard that helps users monitor their message traffic on the social platform Discord.

Dr Eduardo Oliveira, Senior Lecturer in the School of Computing and Information Systems in the Faculty of Engineering and Information Technology, is one of the project leaders.

"Our survey reinforces previous studies showing young girls, women and gender diverse individuals are commonly targeted online. We designed 'AI Ally' to specifically cater to this vulnerable cohort," Dr Oliveira said.

Around 150 million people use Discord each month to communicate via voice, video or text. Existing AI moderation often works by detecting 'toxic' messages and 'punishing' the perpetrator with bans or suspensions. However, researchers say this is a flawed approach.
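In code terms, that detect-and-punish pattern might look something like the sketch below. This is purely illustrative, written against Python's discord.py library; the score_toxicity function is a hypothetical stand-in for a trained toxicity classifier, not any platform's actual moderator.

```python
# Illustrative sketch of the detect-and-punish moderation pattern the
# researchers critique, assuming Python's discord.py library.
# score_toxicity() is a hypothetical stand-in for a trained classifier.
import discord

intents = discord.Intents.default()
intents.message_content = True  # required to read message text
client = discord.Client(intents=intents)

def score_toxicity(text: str) -> float:
    """Toy stand-in: a real moderator would call a trained classifier."""
    return 1.0 if "idiot" in text.lower() else 0.0

@client.event
async def on_message(message: discord.Message) -> None:
    # If the classifier flags the message, it is deleted and the author
    # is banned outright: a misclassification punishes an innocent user.
    if message.guild and score_toxicity(message.content) >= 0.8:
        await message.delete()
        await message.guild.ban(message.author, reason="Toxic message")

client.run("YOUR_BOT_TOKEN")  # placeholder; supply a real bot token
```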

Project co-leader Dr Lucy Sparrow, who is also based in the School of Computing and Information Systems, said: "Language is so nuanced and often AI moderation can misinterpret human interactions and wrongfully accuse a user of committing online abuse."

Operating on an opt-in basis, AI Ally documents all of the user's conversations on the platform and flags inappropriate or harmful interactions in real time. The user is then provided with an explanation of why specific comments have been deemed toxic, and a log book can be quickly generated if they wish to file a report with the platform or with authorities such as the eSafety Commissioner.
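In contrast to the punitive sketch above, that opt-in, evidence-first design might look roughly like the following. Again, this is a hypothetical sketch using Python's discord.py; the opt-in list, the threshold, and the score_toxicity stand-in are assumptions for illustration, not the research team's implementation.

```python
# Minimal sketch of an opt-in, evidence-first monitor in the spirit of
# AI Ally, assuming Python's discord.py library. The opt-in set, the
# threshold, and score_toxicity() are illustrative assumptions only.
import datetime
import json

import discord

OPTED_IN_USERS: set[int] = set()  # IDs of users who consented to monitoring
TOXICITY_THRESHOLD = 0.8          # assumed cut-off; tune per classifier
LOGBOOK = "ai_ally_logbook.jsonl"

def score_toxicity(text: str) -> float:
    """Toy stand-in: a deployed system would call a trained classifier
    that returns a calibrated 0-1 toxicity score plus an explanation."""
    return 1.0 if "idiot" in text.lower() else 0.0

intents = discord.Intents.default()
intents.message_content = True  # required to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message) -> None:
    # Monitor only conversations involving an opted-in user; for brevity,
    # this sketch watches messages that mention them directly, and only
    # flagged messages are persisted.
    if not any(u.id in OPTED_IN_USERS for u in message.mentions):
        return
    score = score_toxicity(message.content)
    if score >= TOXICITY_THRESHOLD:
        # Nothing is deleted and nobody is banned: the flagged message is
        # timestamped and appended to a logbook the user can later export
        # as evidence for a platform or eSafety Commissioner report.
        entry = {
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "channel": str(message.channel),
            "author": str(message.author),
            "content": message.content,
            "toxicity": score,
        }
        with open(LOGBOOK, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

client.run("YOUR_BOT_TOKEN")  # placeholder; supply a real bot token
```

The key design difference in this sketch is that no message is removed and no user is sanctioned automatically; flagged interactions are explained and archived so the user decides whether and how to act.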

Dr Mahli-Ann Butt, a lecturer in Cultural Studies, said: "Our survey findings revealed there's a lack of practical support tools offered to female gamers. AI Ally aims to fill that gap by reducing the burden on victims when navigating complex reporting processes.

"The whole idea is to offer autonomy to the user and equip them with the knowledge and resources to make an informed decision about how they would like to proceed, based on their own preferences and personal safety."

The AI Ally prototype is in the final stage of development and is scheduled to enter the trial phase early next year, with a Girl Geek Academy hackathon also planned.

The research team will present their findings at the Australian and New Zealand Communication Association conference in November, where they will also discuss the potential for broader application of the technology across various digital platforms.

This project was awarded $243,017 in funding through the eSafety Commissioner's Preventing Tech-based Abuse of Women Grants Program, an Australian Government initiative.