Oklahoma State University


Two faces of AI: How the hidden pitfalls of AI are delaying its implementation


Friday, November 1, 2024

Media Contact: Terry Tush | Director of Marketing & Communications | 405-744-2703 | [email protected]

The hedonic contrast effect says that something will seem more or less enjoyable after you compare it to something else you recently experienced.

For example, a painting by your local coffee shop artist might seem extraordinary, until you hang it next to a Vincent van Gogh masterpiece.

Oklahoma State University Associate Professor Dr. Zachary Arens wouldn't bill himself as an expert on the hedonic contrast effect, but he has a solid grasp of the psychological phenomenon turned marketing principle.

Recently, Arens decided to do his own study on the concept. He thought he had read all the academic literature on the subject, but on a whim, he asked ChatGPT for an exhaustive list of papers written about hedonic contrasts.

The AI chatbot went to work and, in seconds, churned out a list of publications. Arens was pleased to find he had read most of the papers ChatGPT aggregated for him, but to his surprise, some were new to him, written by big-name academics and published in prominent journals. The veteran Spears School of Business faculty member felt a little embarrassed that he had missed these seemingly significant papers. How, he wondered, could this have happened?

"I kind of freaked out," said Arens, who began frantically searching for these new articles. "Clearly, I needed to read these papers. I thought I had messed up badly."

There was only one problem: these new papers didn't exist. ChatGPT had created them out of thin air.

Arens had stumbled onto the phenomenon of AI hallucinations, in which programs like ChatGPT produce false, fabricated or misleading responses based on faulty data, incorrect assumptions made by the program, or biases in their datasets.
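Hallucinated citations are relatively easy to catch if each one is checked against a bibliographic database before it is trusted. The sketch below is a hypothetical illustration, not part of Arens' study: it queries the public Crossref API for each title a chatbot suggests and flags titles with no close match. The example titles and the 0.8 similarity threshold are assumptions made purely for demonstration.

```python
# Hypothetical sketch: flag chatbot-suggested citations that cannot be
# found in the Crossref bibliographic database. The titles below are
# invented for illustration; the 0.8 similarity threshold is an assumption.
import difflib
import requests

suggested_titles = [
    "Hedonic contrast effects in sequential consumption",   # example only
    "A completely fabricated paper that does not exist",    # example only
]

def best_crossref_match(title: str) -> str | None:
    """Return the closest-matching title Crossref knows about, or None."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items or not items[0].get("title"):
        return None
    return items[0]["title"][0]

for title in suggested_titles:
    match = best_crossref_match(title)
    similarity = difflib.SequenceMatcher(
        None, title.lower(), (match or "").lower()
    ).ratio()
    status = "likely real" if match and similarity >= 0.8 else "possible hallucination"
    print(f"{status}: {title!r} (closest match: {match!r})")
```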

"I realized that I had been hoodwinked," said Arens, who admitted the matter had shaken his outlook on AI's usefulness in research.

Vectara, a software firm that developed a way to measure AI hallucination rates, estimates that between 3% and 5% of responses from the most popular large language models include a hallucination. That's bad news for academics like Arens, but errors in AI programs can mean the difference between life and death in fields like medicine, in manufacturing centers or on the road in autonomous vehicles.

It's also far from the only hurdle AI programs face. Newspapers, book publishers and artists have lodged intellectual property disputes with AI firms that have, in effect, given the public free access to their work, while AI-generated writing, photography and video have raised legal and ethical concerns that remain largely unresolved.

Privacy and security risks abound with AI programs, which often draw on large amounts of data. That raises questions about how the data is collected, stored and used. AI systems can also be vulnerable to attacks and data breaches.

Just as troubling is the fact that AI programs can inherit biases from the data they're trained on. If a dataset reflects human bias, the program will reproduce that bias in the recommendations it provides. That can lead to adverse or discriminatory outcomes and can be especially problematic in fields like law enforcement, human resources and medicine.
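A toy sketch makes the mechanism concrete. In the hypothetical example below, the training labels come from a historically biased hiring process, and a model fit to them reproduces the disparity for new applicants with identical qualifications. The data, group labels and model choice are invented purely for illustration.

```python
# Toy sketch of bias inheritance: the training labels favor group "A",
# and a model fit to them reproduces that preference. Data, group names
# and feature values are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is [years_of_experience, group_indicator] (group A = 1, group B = 0).
# The labels reflect a biased historical process: equally qualified group B
# applicants were hired less often.
X = np.array([
    [5, 1], [6, 1], [4, 1],
    [5, 0], [6, 0], [4, 0],
])
y = np.array([1, 1, 1, 0, 1, 0])  # biased historical hiring decisions

model = LogisticRegression().fit(X, y)

# Two equally experienced new applicants, differing only by group:
applicants = np.array([[5, 1], [5, 0]])
probs = model.predict_proba(applicants)[:, 1]
print(f"Group A hire probability: {probs[0]:.2f}")
print(f"Group B hire probability: {probs[1]:.2f}")  # lower, despite identical experience
```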

Dr. Bryan Edwards, an OSU professor of management and the Joe Synar Chair at Spears Business, has been exploring the integration of AI into workplace systems, particularly in the context of automation and robotics. His work contributes to the broader discussion of how AI can be introduced into sectors like health care and manufacturing without displacing human workers, emphasizing the potential for technology to fill gaps and take over routine tasks rather than replace existing jobs.

Edwards believes widespread AI integration hinges on mitigating these pitfalls and establishing trust in the software. He also believes some form of regulation may be needed to get there.

When it comes to AI, Edwards said many people fear physical harm, especially regarding self-driving vehicles and autonomous robotics. Some fear that AI and automation will take their jobs, while others simply do not want to relinquish control to a machine.

Luckily, software engineers are working alongside these self-learning programs to iron out the bugs and allay those fears. The potential is simply too great, said Edwards, who thinks AI's impact on society will be profound in terms of productivity and safety.

"I think it could be seismic, at least at the level personal computing was 40 years ago and the internet was 30 years ago," Edwards said. "But first, people have to develop trust in these systems."

Photo illustration by: Artificial intelligence
Story by: Stephen Howard | Discover@Spears Magazine