Quick Answer
Shadow AI is the use of AI tools without IT/security approval—ChatGPT, Claude, Gemini, and hundreds of others. Employees are pasting company data into tools you don't control. It's shadow IT's dangerous cousin, with direct data leakage risk.
The Scale of the Problem
The reality:
- 70%+ of employees use AI tools at work
- Most organisations don't know which tools
- Most AI use is unapproved
- Sensitive data is going into public AI daily

And it's happening in every department:
- Marketing pasting customer insights
- Developers sharing source code
- HR drafting documents with employee information
- Finance analysing sensitive numbers
- Legal reviewing contract details
Why Shadow AI Is Different
Shadow IT was bad. Shadow AI is worse.
| Shadow IT | Shadow AI |
|---|---|
| App might have security flaws | Data actively leaves the organisation |
| Data stored in an unknown location | Data potentially trains AI models |
| Risk only if the app is breached | Risk from normal, everyday use |
| Mostly productivity tools | Directly handles sensitive content |
| Slower adoption | Viral adoption |
The Data Problem
What goes into public AI:
- Customer names and details
- Financial information
- Strategic documents
- Source code
- Employee data
- Contract terms
- Product plans

Where it ends up:
- On the AI provider's servers
- Potentially in training data
- Possibly retained indefinitely
- In an unknown geographic location
- Outside your control

What that exposes:
- GDPR: personal data leaving your control
- Contractual: customer data in unauthorised systems
- Regulatory: sensitive data handling violations
- Competitive: trade secrets at risk
Common Shadow AI Tools
The usual suspects:
- ChatGPT (free version)
- Claude (free web)
- Google Gemini
- Microsoft Copilot (consumer version)
- Perplexity
- Character.ai
- Dozens of others

Plus AI features embedded in everyday tools:
- Notion AI
- Canva AI
- Grammarly
- Otter.ai
- Countless SaaS tools adding AI features
Discovery: Finding Shadow AI
Microsoft Defender for Cloud Apps
Discovers cloud app usage, including AI tools. Reports on:
- Which AI tools are accessed
- Who's using them
- How much data is going there
- Risk scores for discovered apps (see the triage sketch below)
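As a rough illustration, here's a minimal Python sketch that triages a discovered-apps export for AI services. The CSV column names (`app_name`, `users`, `upload_bytes`) are hypothetical, not the product's actual export headers; rename them to match your file.

```python
import csv

# Illustrative keyword list -- extend as new tools appear.
AI_KEYWORDS = ("chatgpt", "openai", "claude", "anthropic", "gemini",
               "perplexity", "character.ai", "copilot")

def flag_ai_apps(csv_path: str) -> list[dict]:
    """Return discovered apps whose name suggests a generative AI service."""
    flagged = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # "app_name" is an assumed header -- adjust to your export.
            if any(k in row.get("app_name", "").lower() for k in AI_KEYWORDS):
                flagged.append(row)
    # Highest upload volume first: that's where your data is leaving.
    flagged.sort(key=lambda r: int(r.get("upload_bytes") or 0), reverse=True)
    return flagged

for app in flag_ai_apps("discovered_apps.csv"):
    print(f"{app['app_name']}: {app.get('users', '?')} users, "
          f"{app.get('upload_bytes', '?')} bytes uploaded")
```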
Network/Proxy logs
If you control the network, look at:
- DNS requests to AI domains (see the sketch below)
- Traffic volume to AI services
- Web filtering logs
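A minimal sketch of the DNS angle, assuming a plain-text log with one query per line; adapt the parsing to your resolver's actual format. The domain list is illustrative, not exhaustive.

```python
import re
from collections import Counter

# Illustrative list only -- maintain your own as tools come and go.
AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com",
    "copilot.microsoft.com", "perplexity.ai", "character.ai",
}

DOMAIN_RE = re.compile(r"[a-z0-9.-]+\.[a-z]{2,}")

def scan_dns_log(path: str) -> Counter:
    """Count queries to known AI domains in a plain-text DNS log."""
    hits = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            for domain in DOMAIN_RE.findall(line.lower()):
                # Match the domain itself or any subdomain of it.
                if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                    hits[domain] += 1
    return hits

print(scan_dns_log("dns.log").most_common(10))
```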
Surveys (supplemental)
Ask employees what they use. You'll get partial information.

Endpoint monitoring
EDR can detect:
- Browser activity to AI sites (sketched below)
- AI application installations
- Copy/paste into AI tools (some products)
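For a feel of what browser-level discovery involves, here's a sketch that reads a copy of Chrome's history database and lists visits to AI sites. A real EDR does this centrally and far more robustly; treat this as illustration only.

```python
import sqlite3

AI_DOMAINS = ("chatgpt.com", "claude.ai", "gemini.google.com", "perplexity.ai")

def ai_visits(history_copy: str) -> list[tuple[str, int]]:
    """List AI-site URLs and visit counts from a copy of Chrome's History file.

    Chrome locks the live database, so copy it first. The path varies by OS,
    e.g. %LOCALAPPDATA%\\Google\\Chrome\\User Data\\Default\\History on Windows.
    """
    con = sqlite3.connect(history_copy)
    rows = con.execute("SELECT url, visit_count FROM urls").fetchall()
    con.close()
    return [(url, count) for url, count in rows
            if any(d in url for d in AI_DOMAINS)]
```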
Control: Managing Shadow AI
1. Provide approved alternatives
Give people safe options:
- Microsoft Copilot for Microsoft 365 (enterprise data protection)
- Azure OpenAI Service
- Enterprise ChatGPT/Claude plans (where appropriate)
2. Policy
Clear acceptable use rules:
- Which AI tools are approved
- What data can never go into AI
- How to request new tools
- Consequences of violation
3. Technical controls
Block or control access:
- Web filtering for unapproved AI tools
- DLP detecting data going to AI tools (toy example after this list)
- Conditional Access restricting access
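To make the DLP idea concrete, here's a toy pattern classifier for outbound text. Real DLP engines use exact data matching, document fingerprinting and ML classifiers; these regexes are deliberately simplistic.

```python
import re

# Toy patterns -- nowhere near production DLP coverage.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> list[str]:
    """Return the sensitive-data categories found in a block of text."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

# Example: a paste headed for an unapproved AI tool.
sample = "Customer jane.doe@example.com, card 4111 1111 1111 1111"
print(classify(sample))  # ['email address', 'card-like number']
```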
4. Training
Educate users on:
- Why this matters
- What's safe vs risky
- How to use AI responsibly
- How to anonymise data before using AI (sketched below)
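And a sketch of what "anonymise before you paste" can mean in practice. Regex redaction misses plenty (names, indirect identifiers, context), so treat it as a teaching aid, not a guarantee.

```python
import re

# Crude redaction rules -- order matters: specific patterns before generic.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b0\d{9,10}\b"), "[PHONE]"),   # UK-style numbers only
    (re.compile(r"\b\d{6,}\b"), "[NUMBER]"),
]

def anonymise(text: str) -> str:
    """Replace obvious identifiers with placeholders before AI use."""
    for rx, placeholder in REDACTIONS:
        text = rx.sub(placeholder, text)
    return text

print(anonymise("Ring John on 07700900123 or john@acme.co.uk re invoice 884201"))
# -> "Ring John on [PHONE] or [EMAIL] re invoice [NUMBER]"
# Note "John" survives: names and context still need a human check.
```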
The Business Case
Don't just lock down. Enable responsibly.
AI genuinely helps productivity. Blocking it entirely:
- Frustrates employees
- Drives shadow AI deeper underground
- Loses competitive advantage
- Creates an adversarial relationship

Instead:
- Acknowledge AI's value
- Provide enterprise-grade tools
- Set clear boundaries
- Monitor and enforce
What We Help With
We're helping clients tackle shadow AI:
- Discovery: What AI is actually in use
- Policy: AI acceptable use development
- Technical controls: DLP, web filtering, Copilot deployment
- Training: AI security awareness
