To get a complete picture, we surveyed equal numbers of working professionals and IT decision-makers from mid-sized organizations and large enterprises across the United States and Canada. In total, 700 participants took part in the survey, representing a wide range of industries.
The IT decision-makers we surveyed ranged from managers to C-level executives, while the working professionals came from various business functions, including HR, IT, marketing, production, sales, and finance.
The United States
Canada
Top industries surveyed
Education
Financial services
Healthcare
IT and telecom
Manufacturing
of IT decision-makers say they have identified people using AI tools without authorization.
More than 60% of office workers report using unapproved AI tools more than they did last year.
Office workers report using AI tools for several tasks despite those tools not being authorized. The two primary tasks were summarizing meeting notes and brainstorming ideas or content.
Summarizing meeting notes or calls
Brainstorming ideas or content
Analyzing data or reports
Drafting or editing emails or documents
Generating client-facing content
Writing code or debugging
Other
However, employees' justifications for their continued use of shadow AI appear misplaced. The most common justification cited is that they think "it's fine since we are using our own devices." This was followed by the belief that the "tool was low-risk" and that they were just "experimenting or testing."
Reasons shared by working professionals for using AI tools without approval from their IT team
IT decision-makers were quick to point out that their organizations were likely to face several risks as a result of shadow AI. Their biggest fear is data leakage or exposure. Unsurprisingly, this fear is shared across mid-sized organizations and large enterprises.
The top four concerns shared by IT decision-makers overall are:
Data leakage or exposure
Employee overreliance on unvetted tools
Inaccurate outputs impacting business decisions
IP infringement or copyright issues
office workers upload sensitive and confidential information to the AI tools they use without authorization.
Fifty-seven percent of IT decision-makers surveyed state that their policies and controls are built to adapt to new tools and usage patterns, indicating that they are well placed to address the challenge of shadow AI. Even so, they believe their organizations should adopt additional measures to reduce the use of unauthorized AI tools.
What do IT decision-makers think might help reduce shadow AI in their organization?
However, employees have indicated that a different approach is needed if they are to stop using tools without proper authorization. Two-thirds of respondents want their organizations to implement clear policies that are fair and practical.
What would make working professionals more likely to follow official AI use policies at work?
Better education on understanding the risks
Official tools relevant to my tasks
Clear policies that are fair and practical
Shadow AI is now a reality in most organizations. The problem, as IT leaders widely acknowledge, is controlling the use of unauthorized AI tools, and the biggest fear is that confidential organizational data is being lost or compromised. Worryingly, with the study revealing that half of employees do not perceive any risk in their behavior, it is high time organizations took remedial action. That action should involve not only systemic changes, such as adapting policies and updating frameworks, but also incentives for employees to work with their organizations on finding solutions together.