The year 2024 has been dominated by conversations around Artificial Intelligence (AI) agents for enterprise applications. The supply side has responded, and continues to respond, effectively; but as enterprises start to implement semi- or fully-autonomous AI agents, it is vital that they do not overlook the strategic, regulatory, operational, and governance risks. To address these risks, stakeholders must carefully assess people, processes, partners, and technology.
AI Agents Continue to Steal Headlines, but What Are They, What Value Do They Bring, and Who's Leading the Market?
| NEWS |
The enterprise conversation around Large Language Models (LLMs) continues, with most, if not all, major enterprises running multiple Proofs of Concept (PoCs) across different use cases and business processes. LLMs do offer value as "helpers," but this value is inherently limited: they are bound by their trained knowledge, reasoning capabilities, and competencies, which constrain performance and operational value for enterprises. To address these functional deficiencies, enterprises turned to "Artificial Intelligence (AI) agents" in 2024. AI agents are autonomous agents, built on LLMs, that are capable of handling complex, multi-step tasks by drawing on different processes, making decisions, and even taking actions to achieve a specific goal. Importantly, these agents can learn from stored memories and iterate on processes to improve performance. The sketch below illustrates this loop in simplified form.
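To make the concept concrete, the following is a minimal, illustrative sketch of the plan-act-observe loop such agents typically run. The call_llm callback and the tool registry are hypothetical placeholders standing in for whichever model endpoint and enterprise systems a deployment actually uses; this is not any specific vendor's Application Programming Interface (API).

```python
# Minimal sketch of an LLM-based agent loop (illustrative only).
# call_llm() and the tools are hypothetical placeholders for whichever
# model endpoint and enterprise systems an implementation actually uses.
from typing import Callable, Dict, List


def run_agent(
    goal: str,
    call_llm: Callable[[str], dict],           # returns e.g. {"action": "search", "input": "..."} or {"finish": "..."}
    tools: Dict[str, Callable[[str], str]],    # named tools the agent may invoke
    max_steps: int = 10,
) -> str:
    memory: List[str] = []                     # running record of past steps (the agent's "memories")
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nHistory: {memory}\nDecide the next action or finish."
        decision = call_llm(prompt)            # the LLM reasons over the goal plus accumulated memory
        if "finish" in decision:
            return decision["finish"]          # the agent judges the goal achieved
        tool = tools[decision["action"]]       # choose a tool (e.g., data lookup, drafting a post)
        observation = tool(decision["input"])  # take the action in the real system
        memory.append(f"{decision['action']}({decision['input']}) -> {observation}")
    return "Stopped: step budget exhausted without reaching the goal"
```

The key difference from a plain LLM "helper" is the loop itself: the agent decides, acts on tools, observes the result, and feeds that outcome back into its next decision.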
There are many applications and use cases that these agents can support. One viable example is a social media campaign agent capable of autonomously managing campaigns end-to-end, drawing on tools, internal/external data, customer analysis, product specifications, and target customer sentiment. This use case, among other low-risk applications, could massively improve employee productivity and drive down time-to-value. Given this potential value and customer requests/expectations, the supply side has been quick to introduce various solutions. Software specialists and hyperscalers have been the fastest movers, but data specialists and implementers may be better placed to drive long-term value creation.
- Software Specialists Are Already Leading the Market, Given Their Agility, Capabilities, and Customer Base: Some key solutions provided by incumbent leaders include Databricks' AI agent framework (integrated through the acquisition of MosaicML), ServiceNow's Xanadu AI agents, NVIDIA's LLM agent, and Dataiku's LLM Mesh, which place AI agents at the core of their enterprise value propositions. The market is also seeing a large influx of new entrants, for example, Beam.ai, Lang.ai, Relevance.ai, and CrewAI. They are seeing traction with both off-the-shelf agents and platforms, building channel and technology partnerships with leading vendors (e.g., IBM has partnered with CrewAI to add agentic technologies to the watsonx platform).
- Hyperscalers Look to Add New Agent Features to Increase Traffic and Stickiness Across Their Cloud Services: AWS Bedrock now offers Connect Contact Lens, an autonomous AI agent that can support contact center analytics. Similarly, Microsoft is providing tools to manage and operate agents across its infrastructure. Google's Vertex AI agent provides a framework to support implementation.
- Data Specialists and Implementers Are Best Placed to Support Long-Term Growth: Data specialists will enable enterprises to implement effective processes and governance to support agent deployment, while implementers will be able to deploy, integrate, and manage agents, reducing the burden on the enterprise. Snowflake, the data specialist, is certainly looking to capitalize on this opportunity through its partnership with Lang.ai.
AI Agent Market Grows, but Enterprises Cannot Forget About Substantial Implementation Risks
| IMPACT |
Enterprises and vendors clearly see value in AI agents, but many stakeholders are overlooking or ignoring the sizable risks and costs associated with fully- or semi-autonomous agents:
- Implementation Is Expensive: It requires strong strategic transformation that eliminates silos across data, processes, tools, and units to enable the agent to operate effectively.
- Lack of Visibility: Most LLMs lack transparency, and it is unclear how, why, and from what training data LLMs make decisions. Given that AI agents are based on LLMs, this opacity is very risky when granting task autonomy.
- Development & Fine-Tuning: Building AI agents is time-consuming and requires them to be trained specifically on enterprise datasets and integrated alongside enterprise processes/tools. This is challenging given the scope of transformation required.
- Intellectual Property (IP): One of the biggest hurdles to enterprise AI adoption is IP leakage. When internal datasets are exposed to external models, enterprises are concerned that the model can learn from and leak this data to competitors. This is especially dangerous for AI agents with access to frameworks, tools, and data across the enterprise.
- Lack of Human-Agent Trust: LLMs have struggled to add value because, even after implementation, employees are not using them, given the lack of trust. AI agents will be treated similarly, with many managers not using them to automate workflows. This will make proving Return on Investment (ROI) to decision makers very challenging.
- Static or Inaccessible Datasets: These hinder AI agents' ability to perform tasks effectively with up-to-date information, limiting the real value that these agents can create. This challenge will be difficult to overcome, as it requires data transformation and integration, leveraging tools like warehouses and automation.
Emerging regulations like the European Union (EU) AI Act will likely have a significant influence on AI agent adoption. Agents (either semi- or fully-autonomous) will create substantial risk for enterprises, as they perform tasks without direct human oversight. The EU AI Act enforces regulation within a "risk-based framework," and it remains unclear which implementations will be perceived as high or low risk. It seems likely to ABI Research that AI agents could be deemed high risk, and hence face significant red tape, especially around transparency.
How Should the Market Approach AI Agents?
| RECOMMENDATIONS |
As the market marches toward AI agents as the next step beyond "generative AI as a helper," it is vital that all stakeholders approach this opportunity carefully. There are several areas that need to be considered before implementation:
- Human Role: This is a critical area to come to grips with, as it will significantly impact risk. AI agents can be deployed semi-autonomously with humans-in-the-loop (lowest level of risk), where humans have complete control over starting and stopping an agent, or with humans-on-the-loop (medium level of risk), where humans act as supervisors setting parameters for the agent. They can also be deployed autonomously, with humans-out-of-the-loop. The financial allure of autonomous agents with no human involvement is huge, with massive immediate cost and efficiency savings; however, decision makers cannot be so short-sighted, as removing oversight brings in substantial hallucination risk, among others. A simple sketch of these oversight modes follows this list.
- LLMs Being Used to Check Output: Enterprises are looking to add controls to agents without human intervention by using a second LLM to check output (also reflected in the sketch after this list). Although this adds another layer of protection, it still exposes deployments to similar regulatory and operational risks, as they are still relying on LLMs. For ABI Research, human oversight at all stages is vital to ensuring viability and keeping risk at an appropriate level.
- Aligning with Data Strategy: The effectiveness of AI agents is largely contingent on access and integration. AI agents must have access across units, data silos, processes, and tools to ensure they can effectively achieve their target. Part of this is providing up-to-date information/data that can inform their decisions. Any AI agent implementation must run in parallel with data transformation.
- Setting Effective Goals: With all AI implementations, understanding the purpose of the tools and then aligning this strategy across different stakeholders is critical to building value-add, sustainable, and scalable implementations. Understanding the business and technology goals of AI agents is no different.
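As a brief illustration of the oversight modes and the LLM-based output check discussed above, the sketch below shows how an action gate might be configured. The mode names, the llm_check callback, and the review queue are assumptions made for illustration, not a description of any particular product.

```python
# Illustrative sketch of gating agent actions by oversight mode, with an
# optional second-LLM output check. All names are assumptions for illustration.
from enum import Enum
from typing import Callable, Optional


class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "in"        # human approves every action (lowest risk)
    HUMAN_ON_THE_LOOP = "on"        # human supervises within set parameters (medium risk)
    HUMAN_OUT_OF_THE_LOOP = "out"   # fully autonomous (highest risk)


def may_execute(
    action: str,
    mode: Oversight,
    llm_check: Optional[Callable[[str], bool]] = None,  # e.g., a second LLM that flags unsafe output
) -> bool:
    """Decide whether a proposed agent action is allowed to run."""
    if llm_check is not None and not llm_check(action):
        return False                                     # checker LLM rejected the output
    if mode is Oversight.HUMAN_IN_THE_LOOP:
        return input(f"Approve '{action}'? [y/N] ").strip().lower() == "y"
    if mode is Oversight.HUMAN_ON_THE_LOOP:
        print(f"[supervisor review queue] {action}")     # runs, but is logged for the supervisor
        return True
    return True                                          # out-of-the-loop: runs unchecked
```

Note that even with the llm_check gate in place, the final judgment still rests on an LLM, which is why human oversight at all stages remains important in ABI Research's view.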
From a customer engagement perspective, vendors must support enterprises in understanding the risks of AI agents and putting in place appropriate strategies, processes, and controls to ensure valuable and safe implementation of agents. From a product perspective, vendors must look to build differentiated AI agent solutions. Achieving this requires a focus on controls, governance, and transparency.