Your new car can park itself and apply the brakes to avoid collisions. Your email and messaging apps suggest responses and word completions. Machines can process MRI images and recognize cancer and other conditions. And ChatGPT can help you write any type of content.
Our news feeds are flooded with the latest advancements in artificial intelligence (AI). As the field continues to grow, who's keeping an eye on what best practices should look like?
To that end, Northwestern is collaborating with the Digital Intelligence Safety Research Institute at Underwriters Laboratories Inc. through the Center for Advancing Safety of Machine Intelligence (CASMI), a research hub that seeks to better incorporate responsibility and equity into AI technology.
"Through this partnership, we've combined our missions to not only create new technologies, but to think about their impact in a way that preserves human safety," said CASMI director Kristian Hammond.
CASMI supports research to better understand machine learning systems - the data that underpins them, the algorithms they use and how they interact with human users. The center also supports projects that use that understanding to evaluate systems and help ensure they are beneficial to all.
"The promise of AI is the notion that we can scale ourselves so we can have a bigger and better impact on the world, and CASMI is working to develop the kinds of guardrails that will help us realize this goal without compromising our well-being," said Hammond, a computer scientist and the Bill and Cathy Osborn Professor in the McCormick School of Engineering.
Northwestern Now sat down with Hammond to discuss some of the basics of AI, where the dangers are and what CASMI is doing to identify and address these issues.
We hear so much about it, but what exactly is AI? How is it present in our daily lives?
AI is the development of systems that perform behaviors that, if a human were to perform them, would be considered intelligent. So, we're trying to take all the skill, all of the essence of intelligence in human beings, and put it in a machine.
AI is everywhere. You've probably interacted with it a dozen times today already. You're texting somebody, and the system suggests sentence completion. That's AI. It's learned from seeing a whole lot of words how to finish sentences. You go to Amazon, Netflix or LinkedIn, where products, shows and resources are suggested for you based on your previous engagement. That's AI. You use something like Alexa or Siri, and it recognizes what you're saying. That's AI. Or a basic Google search - even if it doesn't give you a summary but just presents you with a set of possibilities, AI is behind that.
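Hammond's texting example can be made concrete. The sketch below is purely illustrative - not how any commercial keyboard or chat app actually works - but it shows the core idea he describes: a system that counts which word most often follows another in a body of text, then suggests that word. The toy corpus and the suggest() helper here are invented for this example.

```python
from collections import Counter, defaultdict

# A tiny stand-in for the "whole lot of words" a real system learns from.
corpus = "see you soon . see you soon . see you later . talk to you soon".split()

# Count how often each word follows each other word (a simple bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def suggest(word):
    """Suggest the continuation seen most often after `word` in the corpus."""
    counts = next_words.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(suggest("see"))  # -> "you"
print(suggest("you"))  # -> "soon", because "you soon" outnumbers "you later"
```

Real completion systems use large neural language models rather than simple counts, but the principle is the one Hammond describes: predict the most likely next word from everything the system has seen before.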
Tell us more about CASMI.
We're concerned about the ways AI systems could cause harm in the world. We believe the precursor to thinking about whether you are doing responsible or ethical development of AI is to understand what its impact is in the world. So, we're looking at the nature and the causes of harm at the level of the individual, larger groups and society in general. We figure out where in these systems the harms originate, and then we try to create best practices for how to avoid them in the future.
What does well-deployed AI look like? Or what does ill-deployed AI look like?
AI's ability to manage large spaces of possibility in a way that we can't is what gave us the first round of COVID-19 vaccines. The reason the development was so fast is that we were able to use computational methods with AI to get from point A to point B. Think of it this way: When we have another smart person in the room, things can get better - as long as they join the conversation but don't dominate it. That's how you want to think of AI. Who wouldn't want another smart person in the room at any given time?
Poorly deployed AI can weaken our ability to think, to move through the world and to express ourselves. There are ways you can take AI and turn it into something that absolutely helps us actualize ourselves as human beings. But it's also the case that we can be substantially diminished in one way or another.
Can you give an example?
Think about word completion while texting. It's certainly easier to speed up the chat by just clicking the completed word or phrase. The danger is it's predicting the most likely replies. If you say yes once, and it works, then you quickly develop the habit of saying yes over and over. Now, your communication is no longer just you - it's you and the machine.
And that's not okay?
It's okay, except now you're not making as many personal decisions as you used to. And once you stop making decisions, you're ceding control and waiting for recommendations. Communication is becoming more standardized because of predictive systems like this - or ChatGPT, to cite another example.
Again, this is not necessarily a crisis in and of itself, but when you spin it out, you see that your view of the world narrows over time. What I see in my news feed is different from what you see. That creates balkanized views of what is actually happening in our local communities and around the world.
And, when real communication becomes more difficult, then cooperation gets harder. Finding common ground is harder.