Bowdoin College

07/03/2024 | News release | Distributed by Public on 07/04/2024 03:03

Why Studying the Humanities Is Essential for Designing Artificial Intelligence Systems

Module Three, The Social Sphere
In this part of the course, Professor of Government Michael Franz looked at the regulation of AI as well as the effects the technology can have on citizens.

On the political front, says Franz, there has been much more progress in Europe when it comes to regulation. Recently, the European Parliament passed the Artificial Intelligence (AI) Act, the first-ever legal framework for dealing with AI, providing for EU-wide rules concerning data quality, transparency, and accountability.

In the US, however, it's a different picture, explains Franz: "We're a fairly polarized country that doesn't do any legislating, so there hasn't been much policymaking on this at a federal level." There has been some action at the state level, he adds, where a number of states, including California and Connecticut, have enacted legislation designed to protect people from the negative effects of AI systems. The lack of progress at the national level prompted President Biden to issue an executive order in October 2023 to try to ensure that AI systems are safe and secure.

Franz's class also looks at how AI affects citizens, particularly with regard to the "news" that is fed to us on social media. "As more people, especially the younger population, rely on social media channels to get their news, they are increasingly exposed to what engagement algorithms feed them," he explains. "We look at how manipulation of social media feeds changes the way we are exposed to certain pieces of information and ask how this might be regulated."

Module Four, The Developers' Sphere
Something that developers of AI systems have to be aware of is that they are building something that possesses a level of autonomy and will, in a way, go on to have a life of its own, explains Assistant Professor of Digital and Computational Studies Fernando Nascimento, who has a background in philosophy as well as computer science.

This can lead to what he calls a "misalignment problem," something that formed a central part of class discussion in this part of the course. "AI develops its own models, so it has more agency than other systems. Therefore, its societal and ethical implications are much broader than those of previous digital artifacts," explains Nascimento, who along with Chown coauthored Meaningful Technologies: How Digital Metaphors Change the Way We Think and Live (University of Michigan Press, 2023).

"For example, in 2018, Facebook put together a new AI algorithm to select the posts we see in our feeds. The company said the goal was to maximize meaningful relationships, so you would see information that matters to you, from your family, friends, and colleagues." The problem, he adds, is that the AI algorithm modeled "meaningful" according to the reactions and comments of the post. So "meaningful" was unintentionally translated to "emotional," and what stronger emotions are there than fear and hate? "So, instead of promoting harmonious relationships among friends and loved ones, the AI algorithm created polarization because hateful or provocative posts are more likely to get a reaction, get reposted, and promote more traffic. And to make things even more complicated, on top of possible technical misalignments, one has also to consider the alignment of big technology incentives with broader societal goals."

This, says Nascimento, is just one example of why the role of the software developer has added meaning and importance in the age of AI. "The technical side of AI is just one piece of the puzzle. The problem also has to be tackled from the liberal arts perspective," he explains, "involving several layers of society and considering the impact this technology will have on everyone from a diverse and humanistic perspective."