
The risks of shadow AI in the workplace | Managers at Work


“One of our staff people was seen at a meeting using ChatGPT to help with some work on one of his projects. And the question came up later – Do we have a company policy on using ChatGPT and tools like it for work projects? And if so, what kinds of policies do we need?”

It’s so easy, isn’t it, to use ChatGPT to help with a complex project? But you might wonder, how many of your employees are using personal AI tools to work on your company projects?

And if that’s happening, what should you do about it?

These are tough questions that many employers are beginning to address. We’re talking about employees who – for a variety of reasons – are using unapproved AI tools at work, and about the development of potential policies and standards around the unsanctioned use of these tools. Called “shadow AI,” this practice is raising a lot of questions and concerns across the IT, human resources, cybersecurity and legal worlds.

“Shadow AI isn’t a technology problem – it’s a trust problem. What we’re seeing across workplaces right now is employees quietly turning to AI tools that leadership hasn’t approved, not because they’re trying to be rebellious, but because they’re trying to survive the pace of work,” says Jason Greer, founder and president of Greer Consulting Inc., a consulting firm.

“The reality is that shadow AI is simply the modern version of what happens when innovation moves faster than policy. Nearly half of employees using AI at work are doing so through personal accounts, which means that sensitive company data may be flowing into systems with zero oversight. When that happens, you’re not just risking productivity gaps – you’re risking data leaks, compliance violations, and reputational damage that can cost organizations hundreds of thousands of dollars per breach.”

Besides keeping up with the pace of work, employees have other reasons for keeping their AI use quiet. And that includes the fear of judgment or pushback at work and the lack of a clear policy on what’s allowed at work, according to a recent piece in Forbes on hidden AI use by Emily Lewis-Pinnell, founder of EVAILA, which helps mid-market and growth stage companies adopt AI.

“This gap between policy and practice means leaders can’t see where AI in the workplace is already delivering value, or where oversight is needed to reduce risk.”

Without management, the use of shadow AI can create significant risks around data privacy, bias and accuracy – from potentially flawed or biased information – and security. “Unapproved tools can be Trojan horses for malware or create new cyberattack entry points,” she writes.

The growth in unsanctioned use of AI tools at work is already visible in surveys. A survey by Gusto, a platform providing human resources and payroll services to small businesses, showed that some 45 percent of US workers have used AI at work without telling their employers about it. More than half of those surveyed said their productivity would drop without AI. And some 66 percent pay for these tools themselves.

Looking at a specific sector, a survey by the Association of Health Care Journalists found that 57 percent of health care professionals have used or encountered unauthorized shadow AI. Another survey, by International Data Corp. (idc.com), showed that 65 percent of employees are using these tools, with 39 percent using free, unapproved AI at work and another 17 percent paying for their own tools.

So with this rapid growth of unsanctioned AI use, how do you begin to address it?

Using extra surveillance or implementing bans won’t be effective, experts say. “If employees don’t feel safe admitting how they use AI, you fail to manage the risks you can’t see and you fail to capture the value you don’t know exists,” says Khullani Abdullahi, founder and principal of Techne AI, a Chicago firm specializing in AI governance, risk and compliance, and host of the AI in Chicago podcast.

She said she has seen cases where an employee discovered a 40 percent efficiency gain with an AI tool that couldn’t be scaled across the company because it was invisible. “The most effective policies start with psychological safety and transparency, not punishment.”

Before putting any policies together, Abdullahi suggests companies take several steps. One is to define the AI tools and skills that employees need by department and role to establish a baseline. Another is putting together a governance committee with multiple stakeholders that has buy-in from the C-suite and the board of the company. And another is putting together a comprehensive “Compliance Register” that maps every applicable regulation (at the local, state, national and international levels) and becomes the foundation for governance conversations.

“The biggest mistake I see in Human Resources and leadership teams is treating AI policy as an IT issue,” she says. “Shadow AI is a culture problem, a literacy problem and a trust problem. The policy has to address all three.”

One strategy for addressing unauthorized “shadow AI” is to provide employees with the best company-approved AI tools, says Jason Boyce, founder and CEO of Avenue7Media in Portland, Ore. The biggest risk his company faces is an employee sharing client data with an unauthorized AI tool; that is prohibited activity, and there are severe consequences for any violation, he says.

“However, selecting the best AI tools, making them available to your team, and requiring 100 percent adoption is a great way to prevent shadow AI,” Boyce says. In addition, team members are encouraged, during regular staff meetings, to highlight any new AI tools they find that would be worth vetting for the whole organization. “Making it part of the process ensures that we properly vet new tools that seem to be popping up every week at this point.”

A strategic marketing firm in Philadelphia, called Marketri, also took a proactive approach by creating an AI Council that reviews AI use cases internally and looks at issues around risk. The company also invested in a senior director of AI who owns AI at the executive level, and it developed its own internal centralized system to support AI operations, rather than relying on different third-party systems. It also created guardrails to protect client and company data and to help integrate AI with current business processes, says Debra Andrews, founder and president of Marketri.

“Shadow AI is a sign of the speed of an organization’s AI adoption. It typically results from a company either moving too fast without putting proper guidelines in place, or moving too slowly, so that employees use AI whether it’s been implemented or not,” she says. “Oftentimes, employees who feel pressure to produce increased results with fewer resources look for their own tools.”

“The greatest threat posed by the use of shadow AI is the fragmented manner in which employees use AI and the exposure of company data.”

Managers at Work is a monthly column exploring the issues and challenges facing managers. Contact Kathleen Driscoll with questions or comments by email at [email protected]
