Jackson Lewis LLP

24/07/2024 | News release | Distributed by Public on 24/07/2024 18:34

MYR 2024: Regulating Workplace AI

Details

July 24, 2024

By almost any measure, 2024 is a memorable year for employment and labor law - and it's only halfway done. Our timely report, Mid-Year 2024: Now + Next, takes a closer look at the recent rules, regulations and rulings affecting employers today, the rest of the year and beyond.

Transcript

Welcome to Jackson Lewis's podcast, We get work™. Focused solely on workplace issues, it is our job to help employers develop proactive strategies, strong policies, and business-oriented solutions to cultivate an engaged, stable, and inclusive workforce. Our podcast identifies issues that influence and impact the workplace and its continuing evolution and helps answer the question on every employer's mind, how will my business be impacted?

No matter the month or year, employers can count on one thing: changes in workplace law. Having reached the midway point of the year, 2024 has proven to be no exception, with many significant changes and potential challenges ahead for employers. What follows is one of a collection of concise programs, as We get work™, the podcast, provides the accompanying voice of the Jackson Lewis 2024 Mid-Year Report, bringing you up-to-date legislative, regulatory, and litigation insights on what is taking place now and what you can expect next in the second half of this year. We invite you and others at your organization to experience the report in full on JacksonLewis.com or listen to the podcast series on whichever streaming platform you turn to for compelling content. Thank you for joining.

Eric J. Felsberg
Principal and AI Services Group Co-Leader

Hello everyone. We're happy to have all of you with us. I'm also thrilled to be joined today by my colleague, Robert Yang.

Robert Yang
Associate, CIPP/US

Hello everyone. As part of my practice, I too focus on AI-related issues and data security and privacy matters.

FELSBERG

So Rob, we're here today to talk about one of the most rapidly developing technologies that has significantly disrupted many aspects of not only our lives, but more importantly for our discussion today, the workplace. And I know it's a favorite topic of ours and that topic is artificial intelligence or AI.

Now, while AI can be incredibly helpful, its use has materially impacted how employers manage the workplace. Regulators are racing to regulate its use just as quickly as new technologies are emerging.

YANG

That's right, Eric. There seems to be no limit to how AI is being used in the workplace. Of course, we continue to see employers use traditional AI to process and interpret data, but more recently, we're seeing generative AI being leveraged to create new content and use cases.

We're also seeing more employers use AI to perform tasks traditionally performed by humans. For example, we're seeing AI being used heavily in making hiring decisions, identifying internal talent, and predicting attrition, just to name a few use cases. However, many employers may still be unsure about the possible pitfalls of AI use.

FELSBERG

I think that's right. And that's where the regulators are really focusing their efforts. A primary area of concern for regulators is whether these tools, when used, for example, for hiring decisions, cause disparate selection rates between and among various demographics - think race, sex, or ethnicity.

And we've seen regulatory action at the federal, state, and local levels. To be sure, President Biden recently issued an executive order aimed in part at AI safety and security standards.

We've also seen the Wage and Hour Division of the U.S. Department of Labor issue a field assistance bulletin addressing AI issues as they relate to hours worked and FMLA compliance.

And for our federal contractor friends, the Office of Federal Contract Compliance Programs, or OFCCP, is also interested in the use of AI systems, especially around employee selection decisions such as hiring and promotions. That's where the OFCCP is really focusing, and employers undergoing OFCCP compliance reviews should expect increased scrutiny of their AI systems, to be sure.

We have started seeing this increased scrutiny of AI systems, associated bias concerns, and their compliance with various laws such as the Uniform Guidelines on Employee Selection Procedures. For those of you that are federal contractors, those guidelines should be near and dear to your heart because they touch upon selection mechanisms, issues of disparate impact, and issues of validation.

So indeed, employers will have to analyze their selection mechanisms for evidence of disparate impact. They'll need to think about what measures should be used for doing so and whether they should conduct these analyses under privilege. And employers must also consider whether they should seek to have these AI tools validated or, at a minimum, secure a technical report.

Again, for employers that have undergone OFCCP compliance reviews, this is just the traditional way the OFCCP looks at selections, right? If there is statistical disparate impact in hiring, the same types of inquiries will be raised, namely around what types of selection mechanisms are being used and whether those selection mechanisms are valid.

And we don't see this any differently given the introduction of AI. We think at its very core, it is the same approach. So, at a minimum, employers should be discussing these issues, not only with their legal advisors, but also the providers of these platforms. And often these discussions will focus on the technical rigor of the tool.

YANG

Wow. That's a lot to think about. And as you mentioned, employers should be aware of laws on the local level.

We've seen places like NYC regulate automated employment decision tools in the workplace. We have Colorado that's about to regulate high-risk AI systems.

And in California, where I practice, there is a bill in the legislature, I believe it's SB 942, that would require generative AI systems with over one million monthly visitors or users to provide an AI detection tool. This tool would be used to verify whether content was generated by the system.

The bill also would require that AI-generated content carry a visible and difficult-to-remove disclosure. And on top of that, employers should still be aware of all the data privacy pitfalls that may exist when using AI tools.

Employers really need to think about the future of AI, where it's going to head and how it can present additional challenges in the employment context. Just to name a few use cases, we have biometrics, deepfakes, and AI note takers that are pretty sure to take center stage soon. So, Eric, what's an employer to do?

FELSBERG

Certainly, because of these issues and many more that we haven't discussed, the decision to leverage AI in the workplace should not be taken lightly. Employers are challenged at this point because they must navigate a patchwork of rapidly developing AI regulations. And while you may be tempted, we don't think it's as easy as simply telling your workforce, no AI allowed. We don't really think that's a practical solution because AI is not a fleeting concept, right?

AI is going to be around; I think we all can agree it's going to be here to stay. And I think the only question is how much is AI going to develop; how much is it going to reach into the workplace and start handling tasks that traditionally we never really envisioned AI handling? So, it's kind of an interesting and exciting time, but at the same time, it's challenging for employers.

One of the things we typically recommend is that employers think long and hard about having a policy in place. That policy would provide the guardrails for AI usage. It could address items such as what's approved: what types of tasks may an employee in the organization use AI to complete? Maybe there is a section of the policy establishing a committee responsible for monitoring the development of AI, approving new platforms, and keeping up to date on new developments, not only regarding use in the workplace but also regarding how to provide the guardrails for that usage.

And of course, the policy should cover things like data security requirements and limits on the scope of data. For example, you don't want confidential data being loaded into an AI system, because you may not necessarily know where those data are going to be stored. Like a lot of what Rob does around data privacy, that's a big concern for us and for all employers.

But the policy should be a living document that is subject to frequent updates. As we've said a couple of times already, this area is developing so rapidly that as soon as you have a policy in place, you almost have to immediately ask whether it needs updating because the technology and the regulations have changed so quickly. So it's really important for employers to have an agile approach, but certainly think about having a policy in place that governs AI usage.

Rob, what other points do you have to add to that?

YANG

In addition to all the new regulations we've been seeing, we're starting to see a lot more enforcement actions being initiated. For example, in California, we have the CCPA, and although enforcement action has been limited to the privacy space, we're going to see more of it in AI as well because of the new regulations that are coming out.

For employers to leverage AI in the workplace, it's really better to partner with the workforce to identify the best AI solutions to leverage, usually by way of an AI use policy. This step will likely help an employer fare better in an enforcement action. In addition, employers should really prioritize transparency when they implement these AI tools: clearly communicate with employees how the AI is being used and ensure that the AI algorithms are fair.

And employers should really not forget the ethical implications of AI: monitor for bias and establish accountability mechanisms that include human oversight.

FELSBERG

Thanks, Rob. We've offered a lot of information for employers to think about. I, for one, am excited about the coming months and what they hold in terms of AI development as we move into the second half of the year. It's certainly an exciting and perhaps overwhelming time. Hopefully, employers will find value in some of the best practices we've mentioned.

YANG

Agreed. It's really hard to cover all the emerging developments in this area in just a couple of minutes. To our listeners, please feel free to reach out to Eric or me or any of the Jackson Lewis attorneys you've been working with. Thank you.

Thank you for joining us on We get work™. Please tune into our next program where we will continue to tell you not only what's legal, but what is effective.

We get work™ is available to stream and subscribe on Apple Podcasts, Libsyn, SoundCloud, Spotify, and YouTube.

For more information on today's topic, our presenters, and other Jackson Lewis resources, visit JacksonLewis.com. As a reminder, this material is provided for informational purposes only. It is not intended to constitute legal advice, nor does it create a client-lawyer relationship between Jackson Lewis and any recipient.