Stevens Institute of Technology

10/09/2024 | News release

Applying Fairness to Artificial Intelligence

Research & Innovation

Assistant professor Violet Chen's work explores the real-world consequences of AI-influenced decision-making

Much of the discussion on artificial intelligence and machine learning centers on improving efficiency and performance, but Stevens School of Business assistant professor Violet Chen's work asks another question: How does AI account for fairness in the decision-making process?

Chen, who teaches courses in the Business Intelligence & Analytics and Information Systems curricula, studies optimization modeling methods and algorithms to understand how artificial intelligence affects fundamental questions about fairness: what it means to pursue fairness, and what practical challenges doing so presents.

"Fairness is essentially trying to provide comparable access to resources and benefits to people or stakeholders of different backgrounds," Chen explains. "Whenever we try to allocate some type of resources, we want to make sure that each person can benefit from the resource in certain ways and are more or less equally happy with the benefits they gain. If we think about it in a mathematical sense, a bunch of functions and measures are defined to reflect this concept we call fairness in decisions."

Chen's passion for exploring fairness and equity in AI goes hand-in-hand with the declaration of the first National Women in AI Month, a joint effort by National Day Calendar and the Cadence Giving Foundation to "showcase women as role models, promote their success in the AI and tech sectors, and encourage women to pursue careers in AI technology."

How are AI and equity related?

Fairness is a very classical topic. People have studied it and cared about it for a long time. It's connected to AI because we are using more of these modeling and algorithm tools to support decision-making. As we use these modern tools, we have to think about whether we are designing them in a fair way and whether the impacts of these tools will be distributed fairly across different people.

Why is it important to understand how AI affects fairness?

When we design optimization models, the conventional goal of the model is still mainly on the efficiency side. For example, we try to minimize the cost, or we try to maximize the profit. But what about when these models are applied in settings that have direct social impact? One example is the distribution of medical resources. When you distribute medical resources, it's not just the cost or the profit that matters. It's more about whether all people benefit equitably from these resources.
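
As a rough sketch of that tension, with made-up numbers and not a model from Chen's papers, consider splitting a fixed supply of a medical resource between two regions, one of which gains more benefit per unit. A purely efficiency-maximizing objective sends everything to the high-benefit region, while a max-min objective that maximizes the benefit of the worse-off region shares the supply:

```python
# Toy sketch of the efficiency-versus-fairness tension when allocating a
# fixed supply of a medical resource to two regions. All numbers are
# illustrative assumptions, not data from Chen's research.

SUPPLY = 100                                 # units of the resource available
BENEFIT_PER_UNIT = {"A": 1.0, "B": 0.5}      # region B benefits less per unit

def benefits(units_to_a):
    """Benefit each region receives if region A gets `units_to_a` units."""
    units_to_b = SUPPLY - units_to_a
    return {"A": units_to_a * BENEFIT_PER_UNIT["A"],
            "B": units_to_b * BENEFIT_PER_UNIT["B"]}

# Conventional efficiency objective: maximize total benefit.
best_efficient = max(range(SUPPLY + 1), key=lambda a: sum(benefits(a).values()))

# Max-min fairness objective: maximize the benefit of the worse-off region.
best_fair = max(range(SUPPLY + 1), key=lambda a: min(benefits(a).values()))

print("efficiency-max:", best_efficient, benefits(best_efficient))  # all units go to A
print("max-min fair:  ", best_fair, benefits(best_fair))            # supply is shared
```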

How has the rapid development of this technology, including generative AI, impacted your research?

Some of these algorithms are known as "black boxes," meaning we don't really understand how they work, but we know they work well in certain cases. But depending on how you think about these algorithms, we can easily point out how they can go wrong. A classic example is when you apply an algorithm to predict certain metrics for different demographic groups that we know are represented in the data. These kinds of models tend to perform better for certain groups compared to others. So, we can see how it can become problematic, especially if these models are used to support high-stakes decisions. I think that's what the whole research community, including myself, is trying to do: think about what these fairness and ethical challenges mean in practice and what we can do to improve the existing models and algorithms to handle these challenges.
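
A minimal version of that kind of group-level audit, shown here with made-up labels rather than Chen's actual evaluation code, is simply to compute a model's accuracy separately for each demographic group and look at the gap:

```python
# Minimal sketch of a group-level performance audit: compute a classifier's
# accuracy separately for each demographic group and report the gap.
# The labels below are made up purely to show the pattern of the check.

from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return accuracy computed separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical predictions: the model performs noticeably worse for group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

per_group = accuracy_by_group(y_true, y_pred, groups)
print(per_group)                                              # {'A': 1.0, 'B': 0.25}
print("accuracy gap:", max(per_group.values()) - min(per_group.values()))  # 0.75
```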

What have you enjoyed about your work?

One project that I really enjoyed doing was exploring the connections among different fairness perspectives. A critical challenge that is well-known in this research area is that there are so many different ways of defining and interpreting what fairness and equity mean. Different philosophical principles govern different definitions. Sometimes, it's even the same definition, but when you apply it to different models or algorithms, the impacts or the effects can be quite different. I have been interested in that question for a long time. What can you understand about the connections and differences among these different fairness perspectives?

In a recent paper, my co-authors and I looked at a somewhat classical definition called "alpha fairness," which has a long history in philosophy and economics. We looked at this fairness definition, and then we looked at some newer fairness definitions in machine learning, which are called "statistical disparity." These statistical notions capture how the performance of an algorithm can differ across demographic groups. We took these two fairness perspectives and did some mathematics to establish how they are connected. Our findings reveal some interesting patterns in how achieving one fairness perspective has implications for the other set of fairness definitions. This project peels the onion a little bit, because everything is combined together. We are trying to see if there is a way we can frame these very complex and subtle connections among different fairness perspectives more clearly and relate them to one another.
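
For readers who want the textbook forms of the two families of definitions the paper connects, and these are the standard formulas rather than the paper's specific constructions, alpha fairness scores an allocation with a single parameterized welfare function whose extremes recover the utilitarian and max-min views, while statistical disparity measures such as the demographic parity difference compare a model's positive-prediction rates across groups:

```python
import math

# Textbook alpha-fair welfare function: alpha = 0 recovers the utilitarian
# sum, alpha = 1 gives proportional fairness (sum of logs), and large alpha
# approaches max-min fairness. These are the standard formulas, not the
# specific constructions from Chen's paper.

def alpha_fair_welfare(benefits, alpha):
    if alpha == 1:
        return sum(math.log(b) for b in benefits)
    return sum(b ** (1 - alpha) / (1 - alpha) for b in benefits)

allocation = [4.0, 1.0]
for alpha in [0, 1, 5]:
    print(f"alpha={alpha}: welfare = {alpha_fair_welfare(allocation, alpha):.3f}")

# A common statistical disparity measure: the demographic parity difference,
# i.e., the gap in positive-prediction rates between two groups.
# Predictions and group labels here are illustrative only.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

def positive_rate(group):
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)

print("parity difference:", positive_rate("A") - positive_rate("B"))  # 0.5
```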

Why are Women in AI Month and other similar initiatives important?

The obvious reason is there's still an underrepresentation of women in the field, and we can feel that at both the university and industry levels. In terms of how we can do better, we should try to engage female students early and motivate them to work in these areas, or at least encourage them to take courses related to these topics to show them how broad the field is. AI is not just about coding or solving math problems, especially now that we have more advanced AI technologies that call for more diverse skill sets. It's not just knowing computer science. You should also know something about how we can design the system in a way that works better for customers or users with different needs.