
Legal experts stress need for AI policies in workplace governance

AI compliance illustration with robot arm by Depositphotos

(Depositphotos)



Summary:

AI policies are becoming increasingly important as use in the workplace continues to grow. Whether AI supports routine tasks or more complex decision-making, legal experts say organizations need clear guardrails to manage risk and set expectations.


“It’s crucial for businesses to think about and put in place an AI Governance policy that guides their employees in smart, effective, safe, and legal use of AI,” said Woods Oviatt Gilman partner Katarina (Kate) B. Polozie, CIPP/US, AIGP, about the importance of AI policies today.

To the extent that a business operates in a regulated area, it’s even more important to ensure that employees are using AI in a compliant manner, she added.

“A policy serves to educate the workforce on where and how it is appropriate to use AI, when it is not, and what the consequences are for failure to comply with the policy,” Polozie said. “An effective AI Policy combined with training on how and when to use the technology can really provide a huge source of efficiency for the workforce.”

For organizations that haven’t addressed an AI policy yet, Polozie says they should take two initial steps. The first is an “audit” of which AI tools employees are currently using, which helps determine how AI can assist productivity and what risks the business may be exposed to.

Second, she says employers should be aware of any heightened regulatory context in which they operate (such as children’s services, education, employment, finance, or health) that carries additional privacy, confidentiality, or regulatory considerations.

“They should explore these topics with a seasoned practitioner who can make them aware of some of the heightened areas of risk to address first,” Polozie said.

For organizations deploying or using generative AI tools at work (rather than developing AI tools), Polozie says a basic AI workforce policy should include, at a minimum, the following: scope; approved tools and unapproved tools; acceptable and prohibited use cases; data protection and confidentiality; human review; intellectual property considerations; training cadence and accountability expectations; and monitoring and enforcement.

Polozie stresses that awareness and training are just as important as the policy, if not more so.

“A great policy will not be effective in the absence of training, so make sure to prioritize this annually,” Polozie said. “For employers that are creating AI tools, discuss those early with a seasoned legal professional to design the products from the outset with compliance, data privacy, and data security in mind, as well as proper IP protection. This can avoid costly mistakes that are difficult to correct.”

Tim Plunkett, AIGP, senior counsel at Harris Beach Murtha and a member of the Industry Team, said the legal stakes around AI policies have shifted quickly as adoption accelerates.


“From a legal perspective, having these kinds of policies in place, it’s no longer a nice-to-have; it’s now a baseline requirement,” he said.

Plunkett said AI policies today serve a similar function to longstanding frameworks around cybersecurity, discrimination, and data privacy.

“All those policies are in place to define accountability and maintain, mitigate risks, but also to demonstrate good faith compliance when regulators come knocking,” he said, adding that even without new AI-specific laws, existing regulations already apply, particularly in areas like hiring, performance management, and termination.

At the same time, he said, most organizations are still behind when it comes to formal policies, creating a growing gap between use and governance.

“What we see consistently shows a significant gap in governance between how fast AI is being adopted and how slowly organizations are putting guardrails around it,” Plunkett said, adding that formal policies remain the exception and not the rule.

With AI use already widespread, that gap can introduce real risk, particularly when employees are using tools without clear guidance.

“There’s a lot more use of AI than there ever has been, but most employers are still behind it when it comes to policies,” he said, pointing to the rise of so-called “shadow AI” and the confidentiality and intellectual property concerns that can come with it.

For organizations that have yet to act, Plunkett emphasized that getting started does not need to be overly complex, but it does need to be intentional, with a focus on creating structure without limiting innovation.

“The goal is to not stop innovation, but to define safe, lawful, transparent uses,” he said, adding that policies should address core areas of acceptable use, including human accountability, risk assessment, data protection and transparency, as well as confidentiality and vendor risk controls.

Human oversight, he noted, is a critical part of that framework, particularly in decisions that directly impact employees.

“It’s not a defense to say the algorithm did it,” Plunkett said, underscoring that accountability ultimately rests with the organization.

More broadly, he said, AI governance is not confined to one department but requires coordination across the business.

“AI is a team sport,” Plunkett said, “because it impacts so many different parts of the organization.”

Caurie Putnam is a Rochester-area freelance writer.
