Baker & Hostetler LLP


AI Governance, Risk and Opportunity at the LitForward Innovation Summit

10/16/2024

From October 1 to 3, the LitForward Center for Technology, Research and Analysis convened its First Annual LitForward Innovation Summit. The summit's first session, "AI Governance: Managing Risk and Opportunity," featured a lively discussion of artificial intelligence (AI) and the practical realities of what clients have been asking for. This led to a friendly semi-debate among the summit participants regarding the uptake of AI within the legal field and what clients would expect of AI use both within their own organizations and in work performed for them by outside counsel.

The AI discussion was loosely based on the conceit that people are generally "cognitive misers." That is, rather than making decisions more deliberately (and thoroughly) by breaking tasks down to first principles and building up a supported logical approach, cognitive misers rely on time-saving mental shortcuts: heuristics, social priming, and schemas, scripts and stereotypes. Or, as David L. Hull put it in 2001, "the rule that human beings seem to follow is to engage the brain only when all else fails - and usually not even then."

Given this reality and the rapid uptake of generative AI programs, it was posited that AI would not only be adopted more broadly within organizations but also might lead to more instances of "zombie IT," as originally addressed in a BakerHostetler attorney-authored article in early 2017 (Ransomware - Practical and Legal Considerations for Confronting the New Economic Engine of the Dark Web). The connection is that organizations seek technologies that produce work product more quickly (i.e., AI), require a smaller head count and lessen the cognitive load on the limited individuals working with the AI programs - and the discussion went on to examine technologies already alive (if not well) within organizations that had taken a similar approach.

Given that context, the discussion participants considered what had happened to company websites that had been "spun up" but then left to languish. While some websites dated back to the early 2000s, over the intervening decades layers of tracking and commerce technologies had accreted into those sites' operations. Until the law and plaintiffs' counsel caught up, these technologies operated without the organizations' knowledge and oversight: new initiatives were spun up while others were abandoned, staff came and went, and a pandemic reset what company interactions and communication could mean.

It was asserted that zombie websites were a harbinger of what might come from AI implementation; instead of stand-alone passive websites that slowly accrete tracking technologies over time, AI operating on its own would generate new operational capabilities (and similar concerns) at a much faster rate and far more "intentionally," given how actively AI would engage with individuals or other electronic systems. The discussion further advanced the view that new AI laws provide, and will continue to provide, for increased notice requirements, which in turn require organizations to determine how these technologies work before implementing them.

The discussion ended with the claim that regulators, judges and ultimately the public would come to understand that, if AI is sometimes viewed as opaque or the proverbial black box, contemporaneous documentation by the organization responsible for the AI would be critical to proving that the technology was operating responsibly. The summit participants analogized these requirements to existing legal scholarship on the disclosure of seed sets within e-discovery and technology-assisted review, which BakerHostetler attorneys explored in 2018.