Shadow AI is the use of unauthorised AI tools by employees—ChatGPT, Claude, Gemini, and hundreds of others. It's the new shadow IT, but faster-moving and higher-risk. Your data is being pasted into AI tools you don't control, can't monitor, and didn't approve.
The Scale of the Problem
Studies suggest 70-80% of employees use AI tools at work. Most organisations have approved zero of them.
What's happening right now:
- Developers pasting source code into AI assistants
- Sales teams uploading customer data for analysis
- HR using AI to draft sensitive communications
- Finance asking AI to analyse confidential figures
- Legal reviewing contracts with public AI tools
Why Shadow AI Is Worse Than Shadow IT
Speed of adoption: Shadow IT grew over years. Shadow AI went from zero to everywhere in months.
Data exposure: Shadow IT was often storage or productivity tools. Shadow AI is specifically designed to ingest and process your data.
Training risk: Public AI tools may train on your inputs. Your proprietary data could influence responses to competitors.
Invisibility: Traditional security tools have limited visibility into AI usage. It looks like ordinary web browsing.
Productivity trap: AI tools genuinely help people work faster. Blocking them creates friction and workarounds.
What's Being Leaked
We've seen organisations discover:
- Source code repositories pasted into ChatGPT
- Customer databases uploaded for "analysis"
- M&A documents summarised by public AI
- Medical records used to draft communications
- Legal contracts reviewed by AI tools
- Board materials processed externally
How to Find Shadow AI
Microsoft Defender for Cloud Apps:
- Discovers cloud app usage, including AI tools
- Shows who's using what
- Quantifies data transfer

Network-level signals (a minimal sketch follows this list):
- DNS queries to AI domains
- Traffic analysis to known AI services

Endpoint and browser signals:
- Browser activity to AI sites
- Copy/paste into AI interfaces

Ask directly:
- Survey people about what they're actually using
- Anonymous surveys often get more honest answers
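If you have resolver logs to hand, a first-pass sweep can be as simple as counting queries to known AI domains. The sketch below assumes a CSV export with `client` and `query` columns and a hand-maintained domain list; both are illustrative placeholders, not a complete inventory or your resolver's real schema.

```python
import csv
from collections import Counter

# Illustrative starting list; maintain your own inventory of AI services.
AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "api.openai.com",
    "claude.ai", "gemini.google.com",
}

def scan_dns_log(path: str) -> Counter:
    """Count queries to known AI domains, keyed by (client, domain).

    Assumes a CSV export with 'client' and 'query' columns; adjust
    to match your resolver's actual export format.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            query = row["query"].rstrip(".").lower()
            # Match the domain itself or any subdomain of it
            if any(query == d or query.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["client"], query)] += 1
    return hits

if __name__ == "__main__":
    for (client, domain), count in scan_dns_log("dns_export.csv").most_common(20):
        print(f"{client} -> {domain}: {count} queries")
```

Even this crude view usually answers the first two questions: who is using AI tools, and how often.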
The Response Strategy
Don't just block
Blocking AI creates:
- Workarounds (personal devices, mobile data)
- Resentment
- Lost productivity
- Even deeper shadow usage
Provide approved alternatives
- Microsoft Copilot with enterprise protection
- Azure OpenAI for developers (example call below)
- Vetted tools with proper contracts
- Clear guidance on what's allowed
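For the developer audience, the approved route is easy to demonstrate concretely. Here's a minimal sketch using the official `openai` Python package against an Azure OpenAI deployment; the endpoint, API version, and deployment name are placeholders for your own resource.

```python
import os
from openai import AzureOpenAI  # pip install openai

# Placeholder values; substitute your own Azure OpenAI resource details.
client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="your-gpt4o-deployment",  # the deployment name, not the raw model name
    messages=[
        {"role": "system", "content": "You are an internal assistant."},
        {"role": "user", "content": "Summarise these meeting notes."},
    ],
)
print(response.choices[0].message.content)
```

The point of the approved route: prompts and completions stay within your tenant's contractual boundary instead of a consumer service's terms.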
Implement controls
- DLP for AI tools (a simple content-check sketch follows this list)
- CASB policies for cloud AI
- Browser controls for sensitive contexts
- Monitoring and alerting
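Full DLP for AI traffic is a product capability (Microsoft Purview, your CASB of choice), but the underlying idea is plain content inspection before data leaves. A toy sketch, with deliberately illustrative patterns rather than production-grade detectors:

```python
import re

# Illustrative patterns only; a real deployment would use Purview's
# built-in sensitive information types, not hand-rolled regexes.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "internal_marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}

def classify_outbound_text(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    hits = classify_outbound_text("CONFIDENTIAL: card 4111 1111 1111 1111")
    if hits:
        # In a real control this would block or warn before the paste
        # reaches an AI site, not just print.
        print(f"Blocked: matched {hits}")
```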
Create governance
- AI acceptable use policy
- Approval process for new tools
- Risk assessment framework
- Regular review of AI landscape
Quick Wins
This week:
- Run a discovery scan (Defender for Cloud Apps or similar)
- Quantify the problem (a log-aggregation sketch follows this section)
- Identify the biggest risks

This month:
- Draft an AI acceptable use policy
- Deploy an approved alternative
- Begin user communication

This quarter:
- Full DLP implementation for AI
- Monitoring and reporting
- Ongoing governance programme
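To put numbers on the problem, aggregate outbound volume to AI services per user from your proxy or secure web gateway logs. The sketch below assumes a CSV export with `user`, `host`, and `bytes_out` columns; map these to your gateway's actual schema.

```python
import csv
from collections import defaultdict

# Illustrative host list; reuse the same inventory as your discovery scan.
AI_HOSTS = {"chatgpt.com", "claude.ai", "gemini.google.com"}

def bytes_uploaded_per_user(path: str) -> dict[str, int]:
    """Total outbound bytes to known AI services, per user."""
    totals: defaultdict[str, int] = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"].lower() in AI_HOSTS:
                totals[row["user"]] += int(row["bytes_out"])
    return dict(totals)

if __name__ == "__main__":
    ranked = sorted(bytes_uploaded_per_user("proxy_export.csv").items(),
                    key=lambda kv: -kv[1])
    for user, total in ranked[:10]:
        print(f"{user}: {total / 1_000_000:.1f} MB uploaded to AI services")
```

A ranked list like this turns "people are pasting data into AI tools" into a concrete figure you can take to the board.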
What We Help With
- Shadow AI discovery - Find out what's actually being used
- AI security strategy - Policy, controls, and approved tools
- Technical implementation - DLP, CASB, monitoring
- Copilot deployment - Secure alternative rollout
