A Russian Bot Farm Used AI to Lie to Americans. What Now?

Commentary by Emily Harding

Published July 16, 2024

"Farming is a beloved pastime for millions of Russians."

-RT press office, in response to allegations that RT created an AI-enabled bot farm to spread disinformation

Russia has officially made one dystopian prediction about artificial intelligence (AI) come true: it used AI to lie better, faster, and more believably. Last week, the U.S. Department of Justice, along with counterparts in Canada and the Netherlands, disrupted a Russian bot farm that was spreading pro-Russian propaganda. In a press release, the FBI director and the deputy attorney general highlighted the use of AI to create the bot farm as a disturbing new development. What they did not say, however, is that the West is unprepared to defend itself against this new threat.

This capability enables quick reactions on a huge scale to highly divisive world events. For example, the Russian operation could choose to spread divisive messages about the assassination attempt on former president Trump. In the past, this would have been a labor-intensive task of crafting a variety of credible messages designed to outrage both ends of the political spectrum, then iterating until a divisive note hit a nerve. Now, AI can craft the message, alter it for different audiences, and distribute it rapidly. Russia could enter the chat almost immediately.

Russia has always been at the leading edge of innovation in propaganda. From spreading allegations that the CIA created AIDS in the 1980s and amplifying divisions over race and religion in U.S. society in 2016 to blasting anti-Ukraine messages globally today, Russia constantly finds new ways to push narratives to weaken the West.

AI has now provided the capability to vastly scale up those propaganda efforts. Throughout this disinformation campaign, Russia employed AI to create over 1,000 fake American profiles on social media, then used those profiles to spread anti-Ukraine, pro-Russian narratives in the United States. In short, Russia has co-opted AI to lie.

This effort began in earnest around the invasion of Ukraine. A deputy editor in chief at RT, a Kremlin-linked news outlet that has earned a reputation for spreading the Kremlin's distorted view of the world, organized the development of the bot farm. RT worked in conjunction with agents of the Federal Security Service (FSB), perhaps Russia's most aggressive and nuanced propagator of propaganda. In April 2022, the FSB bought the infrastructure for the farm, including U.S.-based domain names. Those domains hosted the AI-powered bots and even included code that tricked X (formerly Twitter) into believing the bots were real humans. Then, in early 2023, a Russian FSB officer created a private intelligence organization to manage the bot farm, staffed by employees at RT.

The farm used Meliorator, described in a joint statement by the FBI and allied agencies as "a covert artificial intelligence (AI) enhanced software package," to create a multitude of online personas. An open-source tool called Faker generated photos and limited biographical information for those personas. One form of bot was carefully architected to appear quite real: developers used a web crawler to build seemingly authentic personas, which were then used to amplify disinformation shared by other accounts. The personas represented a number of nationalities; many posed as Americans.
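To give a sense of how quickly this kind of persona generation runs, the sketch below uses the open-source Python Faker library named above to emit a batch of profile records. It is purely illustrative: the field names and structure are assumptions made for demonstration, it covers only text fields (not photos), and it does not reflect Meliorator's actual design.

from faker import Faker  # open-source fake-data library mentioned in the advisory

fake = Faker("en_US")  # locale chosen to mimic American-sounding profiles

def make_persona() -> dict:
    # One fake profile record; the keys are illustrative assumptions, not Meliorator's schema
    name = fake.name()
    return {
        "name": name,
        "handle": name.lower().replace(" ", "_") + str(fake.random_int(10, 99)),
        "bio": fake.sentence(nb_words=8),
        "location": f"{fake.city()}, {fake.state_abbr()}",
    }

# Roughly 1,000 profiles, the scale cited above, takes seconds on an ordinary laptop
personas = [make_persona() for _ in range(1000)]
print(personas[0])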

The bots largely posted on X, but the code was clearly written to cross platforms and national boundaries. The government advisory said that "analysis of Meliorator indicated the developers intended to expand its functionality to other social media platforms." The project disseminated disinformation to and about a number of countries, including Germany, Israel, the Netherlands, Poland, Spain, Ukraine, and the United States. Allied governments report that the tool is capable of creating convincing personas in large numbers, using those personas to post credible-sounding information, amplifying messages from other bot personas, and formulating original messages tailored to the apparent interests of each fake persona.

This development was predictable, but the United States was still unprepared to confront it. It was predictable in that it sits at the confluence of two circumstances: First, Russia is fighting a war against Ukraine that has featured both kinetic and information combat. Second, it is a U.S. election year, when Russia has in the recent past significantly stepped up its propaganda efforts. Thus, there was every reason to expect Russia would use 2024 to press the cutting edge of propaganda technology.

Despite this advance warning, U.S. efforts to defend against disinformation campaigns remain anemic at best. The Global Engagement Center at the State Department and the Foreign Malign Influence Center at the Office of the Director of National Intelligence are small and understaffed. The rules about what U.S. government (USG) agencies are and are not allowed to do in the information space are unclear and sometimes contradictory. In truth, the USG is largely dependent on industry to keep the bot farms away, and even the USG's ability to talk to social media companies about these issues was recently the subject of intense legal debate. The sum total of these efforts is that the United States is crawling, and its adversaries just strapped on a jet pack.

Creating highly tailored propaganda is now fast and easy. Russia proved that AI can create realistic-seeming personas, drive content at scale, and trick platforms into believing personas are not bots at all. A group of allies caught this effort and seized the relevant domains, but not until the work had been underway for two years. In another two years, the state of the art in AI will be such that a bot can identify the messages that resonate best with a micropopulation and then feed that population what they want to hear. The payload of information will feel as local and genuine as a conversation over the fence with a neighbor.

Where Russia has led, others will follow. Western allies are sure to see many other attempts by intelligence services, information brokers, and even private citizens to use AI to spread disinformation, often to weaken the resolve of those who would stand up to autocrats and bullies.

Defensive efforts should move faster: Social media companies should capitalize on AI to automatically identify anomalous behavior, and they need teams of humans to investigate what the AI systems flag. The United States and allied governments should support research into how to use AI to defend against AI. Meanwhile, Congress should provide more freedom and more tools to the parts of the government that are fighting propaganda, in particular supporting efforts to inoculate and educate the public about how to avoid getting duped. Only a highly skeptical population, reading anything they see on social media critically, can starve the bot farmers who seek to divide them.
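As one concrete illustration of the automated triage described above, the sketch below applies scikit-learn's IsolationForest to simple per-account activity features to flag outliers for human review. The feature choices, numbers, and contamination rate are assumptions made for the example; they do not describe any platform's actual detection pipeline.

import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one account: [posts_per_hour, repost_fraction, account_age_days, new_followers_per_day]
# Values are invented for illustration only.
accounts = np.array([
    [0.3, 0.10, 900, 0.5],   # typical human-looking activity
    [0.5, 0.20, 1500, 0.2],
    [0.2, 0.05, 400, 0.1],
    [9.0, 0.95, 30, 40.0],   # high-volume, young, repost-heavy: bot-like
    [8.5, 0.90, 25, 35.0],
])

# Unsupervised outlier detection; contamination is an assumed share of suspect accounts
model = IsolationForest(contamination=0.3, random_state=0).fit(accounts)
flags = model.predict(accounts)  # -1 = anomalous, 1 = normal

for row, flag in zip(accounts, flags):
    if flag == -1:
        print("Queue for human review:", row)

In practice, the value of such a system is the handoff, not the model: AI narrows millions of accounts to a reviewable queue, and human analysts make the final call.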

Emily Harding is the director of the Intelligence, National Security, and Technology Program and deputy director of the International Security Program at the Center for Strategic and International Studies in Washington, D.C.

Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2024 by the Center for Strategic and International Studies. All rights reserved.
