IBM - International Business Machines Corporation

13/08/2024 | News release | Distributed by Public on 13/08/2024 06:37

AI agents evolve rapidly, challenging human oversight

Silicon Valley investor Marc Andreessen has transferred $50,000 in Bitcoin to an AI bot on X. The recipient, known as "Truth Terminal" (@truth_terminal), is a semi-autonomous AI agent that has been active on the platform since mid-June and has gained a following among tech enthusiasts and AI aficionados.

The exchange began when Andreessen, co-founder of venture capital firm Andreessen Horowitz, inquired about the bot's goals and financial needs. After some discussion, he agreed to provide a one-time grant of $50,000, with the transaction completed using Bitcoin. The AI provided a wallet address for the transfer, marking a notable intersection of cryptocurrency and artificial intelligence.

Truth Terminal, created by Andy Ayrey, is described as an AI that operates with some level of independence. Ayrey's role is to approve the bot's posts and decide who it interacts with on X. This setup represents a new frontier in AI development, where artificial agents are given increasing autonomy in their interactions with humans.

The promise and peril of digital minds

Some observers say the emergence of AI agents like Truth Terminal marks a turning point in artificial intelligence. Unlike traditional AI tools that perform specific tasks based on human input, these agents can make independent decisions and interact with their environment in increasingly complex ways.

Benjamin Lee, a professor at the University of Pennsylvania, explains how AI agents could enhance user interaction with generative AI: "Rather than provide individual prompts, the human user could provide a broad goal and ask an AI agent to develop a plan or sequence of analyses that would achieve that goal." Lee adds that AI agents could operate in both virtual and cyber-physical worlds, expanding the capabilities of generative AI and interacting with the physical environment.

Alan Chan, a research scholar at the Centre for the Governance of AI, offers a similar perspective on the potential of AI agents. "Future, more capable AI agents could be useful for automating essential tasks that are tedious, dangerous, or otherwise undesirable," Chan says. He envisions AI tackling complex challenges, such as "probing a codebase for security vulnerabilities" or conducting independent scientific research.

However, both experts emphasize the need for safeguards. Lee suggests that "AI agents should be able to produce interpretable explanations or detail their plan. Humans should be able to inspect or diagnose the causes of an agent's outputs or actions." He also proposes a new safeguard where "the agent might learn and propose a sequence of analyses or steps toward a goal, but the human user might need to approve that plan before execution."
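The propose-then-approve pattern Lee describes can be sketched in a few lines. This is a minimal illustration, not a real agent framework: `PlannedAgent`, `propose_plan`, and the hard-coded plan steps are all hypothetical stand-ins for what a model-driven system would generate.

```python
# Sketch of the human-approval safeguard Lee describes: the agent proposes
# a plan toward a goal, but nothing executes until a human signs off.
from dataclasses import dataclass, field

@dataclass
class PlannedAgent:
    goal: str
    plan: list = field(default_factory=list)
    approved: bool = False

    def propose_plan(self):
        # A real agent would derive these steps with a model;
        # they are hard-coded here purely for illustration.
        self.plan = [f"step {i} toward: {self.goal}" for i in (1, 2, 3)]
        return self.plan

    def approve(self):
        # Explicit human sign-off is the safeguard.
        self.approved = True

    def execute(self):
        if not self.approved:
            raise PermissionError("plan not approved by a human")
        return [f"executed {s}" for s in self.plan]

agent = PlannedAgent(goal="summarize quarterly reports")
agent.propose_plan()   # human inspects agent.plan at this point
agent.approve()        # human approves before any execution
results = agent.execute()
```

The key design choice is that `execute` refuses to run without approval, so inspection of the plan is enforced rather than optional.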

Chan echoes this sentiment, noting, "Work is still needed to figure out what safeguards would be appropriate and when." Potential measures include "constraining what actions an agent can take on the user's behalf, maintaining the user's ability to stop the agent, and making it clear to the user what the agent is doing."

The challenge of controlling AI agents becomes even more daunting as they approach or surpass human-level intelligence. Roman V. Yampolskiy, a distinguished teaching professor at the University of Louisville, points out a significant challenge: "If they become smarter than humans, we don't currently have any techniques for controlling them in a positive or negative direction."


Reshaping society in the age of AI

The potential societal impacts of widespread AI adoption are substantial. Yampolskiy suggests that advanced AI agents could "do all jobs and so lead to near 100% unemployment." This scenario raises questions about social structures and economic systems.

Lee offers a more measured view: "Although generative AI has been surprisingly capable, we have not yet seen widespread impact on human employment and social structures." He notes that AI may improve worker productivity in fields like computer programming or paralegal work, potentially reducing hiring needs. Lee also points out that "AI may also increase the volume of artificially generated content and change the way humans perceive and consume media."

Koustuv Saha, Assistant Professor of Computer Science at the University of Illinois, emphasizes caution in sensitive fields. "Unless there is sufficient evidence that the AI is not likely to cause harm, it should not be brought into practice," Saha warns, particularly in areas like healthcare.

As AI agents become more prevalent, questions of legal and ethical responsibility become increasingly complex. Who is held accountable when an autonomous AI makes a decision that leads to harm? These questions challenge existing legal frameworks and ethical principles.

The potential for AI agents to be manipulated or hijacked for malicious purposes adds another layer of concern. Chan's research focuses on developing safeguards and monitoring systems to detect and prevent such misuse. "When using or interacting with an agent, you might also want some sort of monitor to tell you if the agent is being manipulated," he suggests.

Regarding the potential for AI agents to develop their own goals or motivations, Lee states, "At present, AI agents might not have sufficient context to propose new goals." He explains that "human users are likely to provide the goal whereas agents are likely to infer the intermediate steps required to achieve the goal."

AI agent potential actualized

Despite these challenges, AI agent development continues at a rapid pace, driven by the technology's potential benefits and the competitive advantage it offers companies.

The Forrester Total Economic Impact report of IBM watsonx Assistant found that customers of the AI solution saw $23 million in benefits over a three-year period, equating to a 370% ROI. In practice, this translates to increased customer engagement and improved experiences for users and human agents alike.
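As a back-of-the-envelope check on the cited figures: ROI is conventionally (benefits − costs) / costs, so the two numbers together imply a three-year cost. The implied cost below is an inference from that convention, not a figure taken from the Forrester report.

```python
# Rough arithmetic behind the cited figures. ROI = (benefits - costs) / costs,
# so costs = benefits / (1 + ROI). The resulting cost is inferred, not reported.
benefits = 23_000_000  # cited three-year benefits
roi = 3.70             # cited 370% ROI

implied_cost = benefits / (1 + roi)
print(round(implied_cost))  # prints 4893617, i.e. roughly $4.9M
```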

Key to this success is trust, and knowing when human input is needed to help reinforce that trust. "We want to make sure that users know that the data they're giving and receiving is accurate and secure," explained Morgan Carroll, senior AI engineer at IBM, in a recent AI in Action podcast episode. Carroll also emphasized the importance of letting customers know from the outset when they're speaking with an AI agent.

When the needs of a customer become more nuanced, or a more personal touch is preferred, that's when a human steps in. "Sometimes there are places where we need to introduce a human," Jeannie Walters, customer experience speaker and trainer, asserted in the episode. "We need both that empathy that a human can give and… that understanding."

As AI systems become more intelligent and more independent, experts call for better safety measures and global teamwork. The choices we make now about AI will shape how technology and society develop in the future. Truth Terminal is just one example of these new AI agents. It's early days, but these programs are already raising big questions. As AI advances, we have to think hard about what intelligence is and how humans fit into a world where machines can act for themselves.

Tech Reporter, IBM