INTRO
Establishing a governance structure for artificial intelligence is essential today. Before committing to any specific technology, organizations should evaluate a potential policy's risks and benefits to create maximum opportunity for successful outcomes.
On this episode of We Get AI for Work, we discuss why organizations should set up effective governance structures, form a multidisciplinary governance committee, and develop AI policies to address confidentiality, accountability, and compliance.
Today's co-hosts are Eric Felsberg, principal in Jackson Lewis' Long Island office, and Joe Lazzarotti, principal in the firm's Tampa office; together they co-lead the firm's AI Group.
Eric and Joe, given that there are many features to consider when creating an effective governance structure, the question on everyone's mind today is: Why should organizations have a structure in place before adopting and utilizing AI technology, and how does that impact my organization?
CONTENT
Eric J. Felsberg
Principal and Artificial Intelligence Co-Leader
Hello, everyone, and welcome back to our next episode of We Get AI for Work. My name is Eric Felsberg. I'm joined by my partner Joe Lazzarotti. On this episode, we're going to be talking about creating an effective governance structure when dealing with artificial intelligence.
This is something you and I have been speaking to a lot of employers about - we certainly see how critical establishing a structure is in this space. But one of the things I've seen is that a lot of employers don't think about all the features of a governance structure before they jump in and start using some of these AI platforms, which are very attractive and more and more readily available.
So, Joe, I guess a good place to start is: When thinking about a governance structure, why should we do it? Why should we have a structure in place?
Joseph J. Lazzarotti
Principal and Privacy, Data and Cybersecurity Co-Leader
I totally agree, Eric. It's a conversation that a lot of organizations are having. Some organizations may be more developed, and this is how they approach a lot of different technologies, not just AI: "Okay, we want to have an objective and we want to think it through: What are the risks, how do we best achieve results, and how can we measure success?" Other organizations see these technologies as: "Hey, this is a great use, let's just run with it." Maybe they don't have the infrastructure.
In either case, both organizations want to maximize the technology. These AI technologies bring enormous benefits and can help in many ways. But they also come with a lot of risk in terms of managing data and assessing accountability, liability, and compliance. So, if you don't have appropriate governance around AI, you may not recognize many of those risks, and you may not be able to capture a lot of the benefits that are available.
Felsberg
I think that's right. And I don't think this needs to be overly complicated, either. There are some really simple steps that seem perfectly obvious but sometimes are just completely missed when going down this path. The simplest of them is to take an inventory. Our inboxes and everything else are flooded with all sorts of new AI tools - and some of them are amazing. You have to think about all the folks in your organization who are also getting those emails and those alerts about a new AI platform. They may say, "Hey, let me jump on and let me try this out. And, hey, this is great, right? It's really helping me streamline and be a lot more efficient." They never think to alert the organization, "I'm using this thing," and they may not be completely aware of all the risks.
That's why we think a good first step is to take an inventory of all the AI tools being used across the organization - just so you can get your hands around exactly what's in use and so those tools can be properly evaluated.
Speaking of evaluation, who should do that? Who's going to own this process? Who's going to do that inventory? Once we get our hands on it and we think about using additional AI tools, who should own that? Should it be IT? Should it be HR? Should legal do that? Maybe there's a compliance function that needs to get involved.
Joe, one of the things that you and I often talk about and we talk about with our clients is developing a governance committee that would own this function. If we do establish a committee, who should be on that? Who are the different stakeholders that should be part of this committee?
Lazzarotti
It's a great point. Let me get back to what you said about taking an inventory, because it's really important. It matters in a lot of cases because it's not just an inventory of what you're using; it's also how you're using it and what happens down the road. We've encountered situations - and it's not just AI. Managing technology on Day One, you may use it for a certain purpose, and then maybe six or 12 months down the road, the vendor, if you're using a vendor, offers a different iteration of that tool or a different feature that comes with different considerations. And the same vetting process that may have happened on Day One doesn't happen on Day 180 or Day 360. Those use cases, if you will, or iterations of those use cases, don't get the same attention and could create issues.
To help solve that, you're right: Having some type of a committee or whatever you want to call it - some multidisciplinary group of folks, if that's appropriate to the organization - can really be critical to evaluating it. I think a lot about who should be on it. What I see is that a lot of times this gets delegated to the IT department for obvious reasons - and you have to have IT. But a lot of it, when you think about governance, is the organization wanting to think about: "What are we going to use it for? What are the use cases?" Because if you're using generative AI to help enhance your marketing function, HR is not going to have as much input there. But at the same time, if you're using AI to help make decisions about candidates, then maybe you do want to have HR there. Maybe you don't always want to pull people in and out as you go, so having someone from IT, HR, marketing, operations, and legal - there are a lot of different places to look for important stakeholders. But I think it's going to start with: What are the objectives of the organization in terms of how they want to use a particular AI application or tool? That's a good starting point, at least, to understand who should be involved in the process.
Felsberg
Yes, I think that's right. You have to think that different organizations are going to use AI differently. Given the practice we are in, a lot of employers are using it for personnel selection purposes - can we hire or promote the best candidate more efficiently, or whatever it may be. Others are using it more to perform a main element of their business, to streamline some of their functions and processes. And so, I agree that organizations have to think about how this is being used. That will help you put together this puzzle as to who should be on this committee that we're proposing here to evaluate not only the AI they're using now, but future uses as well.
A lot of the organizations I work with are all a little different but, for the most part, look very similar. It's pretty common, to echo what you were saying, to have somebody from legal; someone from human resources, if the tool is being used for an HR function; if it's being used more for a business purpose, maybe someone from the business or compliance side. And certainly IT - these are the folks who, a lot of times, are going to understand how these tools actually operate, so that's certainly important to have. So, I would agree with that.
This should not be an insular committee. It's a good idea to have a core group that evaluates these tools, but they have to communicate with the rest of the organization about expectations, permissible use cases, impermissible use cases, and so forth. You and I, Joe, spend a lot of time talking with company representatives about reducing all of this to writing and memorializing it in an AI policy. This technology is developing so rapidly that the policy really needs to be - it sounds a bit cliché - a living document that is nimble and can change as new technologies evolve.
I know we're going to be covering AI policies in an upcoming episode, but, Joe, if we're thinking about coming up with a policy, just in very broad terms, what are some of the features that you would expect to see in an AI policy?
Lazzarotti
Certainly topics like confidentiality, ensuring accuracy, dealing with company IP, accountability, and transparency. I'm seeing a lot of that in policies, but I also want to take a step back and, again, just think about the use-case issue and how it may need to drive a particular policy.
As an example, take an AI tool - maybe it's a dash cam that the health and safety group in an organization decides will be helpful to ensure the safety of drivers and to minimize insurance costs. These tools have AI capabilities: They might record voices, they might tell whether an employee is wearing their seatbelt, they might be able to understand how the vehicle is being driven - a whole host of really interesting technologies. It may seem like a really important and valuable use case for the company - it saves the company money, protects the drivers, protects the public - but HR is never involved in that. And so, from a policy perspective, you have to ask, "Well, how do we make sure that you've gotten appropriate consents if you're recording voices, and what type of data is collected, if any, on those devices? If we're collecting biometric information, do we need any consent from the employee if we're doing a facial scan?"
The point is, in that situation, if you have one group in an organization thinking about this and rolling something out that has an impact on employees in some way, and HR has never really been consulted and legal has not been consulted, you really could have some risks. A lot of the inclination is, "Hey, we want a policy to give to employees to govern the end user, the driver of that truck where the dash cam is used." But there's also an argument for having an internal policy that directs the governance of the people who are adopting and implementing these applications, to ensure that they're doing all the right things they need to do to minimize the risk of developing and, in most cases, implementing these kinds of technologies.
So, when we talk about policies, there's an opportunity to say, "Well, yes, of course we want to explain to employees transparency and confidentiality and IP and getting approval for the use cases they want to use." But then there's a need to have some internal policy around what individuals and departments in an organization should be doing and how they go about rolling it out, before you get to the policy for the end-user employees. That's just how I'm seeing some of this develop in terms of managing it.
Felsberg
Your comments underscore this notion that, as you think through these issues, you need to bring in the different stakeholders to help you - just as you described a moment ago: I need folks from the business. I need legal in that situation. I may need HR, and so forth.
On a related note, employees have to be made aware of how AI is being used and the expectations related to their own use of AI. And so, an important part of this is also thinking about training. Once we nail down exactly which AI we're going to support, monitor, and implement in the organization, training employees on the permissible and impermissible uses of that AI technology really is a critical part of governing this whole initiative.
Now, Joe, switching gears a little bit here. I know that a lot of times we run up against what seems like an age-old question, even though AI is relatively new in our space: Should we buy an AI platform or should we build something in-house? A lot of the more sophisticated organizations may have the talent in-house to build some of these AI platforms, and they may be very good. That opens up a whole other host of issues for us to think about in terms of the identification of use cases and also this question of liability. So, talk a little bit about how we think about this question of liability when you're dealing with either a third-party vendor or you're building a platform within the four walls of your organization.
Lazzarotti
Yes, you hit on it: understanding. There's a lot that goes into this that may be beyond the scope of this discussion, and we can certainly dig into it more in a later episode. A lot of times when I'm presenting to HR leaders, I ask, "Do you feel comfortable being able to evaluate the people on your IT team, particularly the ones who are really driving that group?" A lot of times there's some scratching of heads: "Well, no. The computers come on in the morning, right? So, everything must be working the way we want it to." Particularly in this case, this is complex stuff. The people who are developing these tools are brilliant, and they really are advancing the ball in a lot of ways. But the ability to understand that, whether it's being done accurately, and whether we can feel confident rolling it out? The organization may not be able to assess the performance of those tools internally. So a really important component is understanding your capabilities internally to be able to then make a decision about outsourcing it or buying.
But then, even if you do use a vendor, there are a lot of vendors running to market to take advantage of the demand for these tools. And, as with any type of product you buy, there's the question of whether they are what they say they are, what they're being promised to be. It's really important to make sure that if you're doing that, you're vetting those vendors, testing the tool, asking the right questions, and getting some help to know what questions to ask. Those are really important things, because only then can you really evaluate, "Hey, does it make more sense? We're not finding a vendor that we feel confident about. We feel like maybe we could do it internally." You have to weigh that. But only after really assessing your internal capabilities, and then what vendors are doing and what you feel comfortable with, can you really decide.
One key question that you also mentioned, Eric, is who owns the liability? And I think that's another big question.
Felsberg
Just on that last point, especially when you're using a third-party vendor to provide some of these AI technologies: In our modern world, we're often confronted with terms and conditions. Just as in everyday life, you want to download something or use a new technology, and you get these terms and conditions. Because the language is often, from a legal perspective, very dense, the everyday person may not necessarily want to trudge through all of it. But when you're implementing an AI tool in the workplace, it's really important that you understand exactly what, from the vendor's perspective, this AI is intended to be used for and how it's going to be used. Are there liability issues? Have they addressed that in the terms and conditions? Again, it really needs to be scrutinized. Have discussions with the vendors.
This is a rapidly developing area. Some of the issues that legal may be thinking about may never have occurred to folks on the development side, and vice versa. So, certainly important to think about.
Joe, before we close out this episode, any last-minute comments, words of advice?
Lazzarotti
Only that there's no time like the present to really be thinking about these things. For the listeners out there, you may be surprised how many employees are actually using some of these tools without you even knowing it.
Felsberg
Yes, yes.
Good discussion as always, Joe. To our listeners, if you have any questions about anything that we discuss or if there's a topic out there that you've been thinking about and you would like us to discuss it, by all means, please reach out to us: Email us at [email protected].
We look forward to hearing from all of you. Until our next episode, thanks for listening and we'll be back with you soon.
OUTRO
Thank you for joining us on We get work™. Please tune into our next program where we will continue to tell you not only what's legal, but what is effective. We get work™ is available to stream and subscribe to on Apple Podcasts, Libsyn, SoundCloud, Spotify and YouTube. For more information on today's topic, our presenters and other Jackson Lewis resources, visit jacksonlewis.com.
As a reminder, this material is provided for informational purposes only. It is not intended to constitute legal advice, nor does it create a client-lawyer relationship between Jackson Lewis and any recipient.