The EU AI Act is an EU regulation that categorises AI systems by risk level, with requirements ranging from transparency obligations to strict compliance regimes for high-risk AI. UK businesses selling to EU markets, using AI in EU operations, or building AI systems for EU deployment need to understand it.
What the AI Act Is
The EU AI Act is the world's first comprehensive AI regulation. It:
- Categorises AI systems by risk level
- Sets requirements based on risk
- Bans certain AI applications outright
- Requires transparency for others
- Creates compliance framework for high-risk AI
Risk Categories
Prohibited AI (Banned)
Not allowed:
- Social scoring by governments
- Real-time biometric identification in public (with exceptions)
- Emotion recognition in workplace/education
- AI exploiting vulnerable groups
- Subliminal manipulation causing harm
High-Risk AI (Strict Requirements)
Categories:
- AI in critical infrastructure (transport, energy, water)
- AI in education (access, assessment)
- AI in employment (recruitment, evaluation, monitoring)
- AI in essential services (credit scoring, insurance, social benefits)
- AI in law enforcement and border control
- AI in legal/democratic processes
Requirements:
- Risk management system
- Data governance
- Technical documentation
- Record keeping
- Transparency to users
- Human oversight
- Accuracy, robustness, security
- Conformity assessment
Limited-Risk AI (Transparency Obligations)
Examples:
- Chatbots
- Emotion recognition systems
- Deepfake generators
- AI-generated content
Minimal-Risk AI (No Requirements)
Most AI applications fall here—spam filters, inventory management, etc. No specific requirements.
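The four tiers above lend themselves to a first-pass triage when you inventory your AI systems. The sketch below is illustrative only: the keyword tags and tier mapping are assumptions for demonstration, and real classification requires checking each system against the Act itself (and usually legal review).

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative tags only -- these sets are assumptions, not the Act's
# legal definitions. A real inventory maps each system against the
# Act's own category lists.
HIGH_RISK_TAGS = {"recruitment", "credit-scoring", "education-assessment",
                  "critical-infrastructure", "border-control"}
LIMITED_RISK_TAGS = {"chatbot", "deepfake", "ai-generated-content"}

def classify(use_case_tag: str) -> RiskTier:
    """Rough first-pass triage of an AI use case by keyword tag."""
    if use_case_tag in HIGH_RISK_TAGS:
        return RiskTier.HIGH
    if use_case_tag in LIMITED_RISK_TAGS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A triage like this is only a starting point for prioritising compliance effort; borderline systems should default to the stricter tier pending proper assessment.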
UK Position
Post-Brexit, the UK isn't directly subject to the EU AI Act.
But:
- UK businesses serving EU markets must comply
- AI systems deployed in EU must meet requirements
- UK is developing its own AI framework (currently lighter touch)
- Supply chain requirements may flow through
What High-Risk Means in Practice
If you develop or deploy high-risk AI:
Before deployment
- Risk assessment
- Technical documentation
- Quality management system
- Conformity assessment (self or third-party)
- CE marking
- EU registration
During operation
- Post-market monitoring
- Incident reporting
- Record keeping
- Transparency to affected individuals
For deployers (not just developers)
- Human oversight
- Input data relevance
- Monitoring for risks
- Informing affected individuals
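The obligations above are, in practice, tracked as a compliance checklist per high-risk system. The following is a minimal sketch of such a tracker; the phase and item names simply mirror the lists above, and any real programme would attach evidence, owners, and dates to each item.

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceItem:
    name: str
    done: bool = False

@dataclass
class HighRiskChecklist:
    """Per-system tracker for the AI Act's high-risk obligations."""
    before_deployment: list = field(default_factory=lambda: [
        ComplianceItem("Risk assessment"),
        ComplianceItem("Technical documentation"),
        ComplianceItem("Quality management system"),
        ComplianceItem("Conformity assessment"),
        ComplianceItem("CE marking"),
        ComplianceItem("EU registration"),
    ])
    during_operation: list = field(default_factory=lambda: [
        ComplianceItem("Post-market monitoring"),
        ComplianceItem("Incident reporting"),
        ComplianceItem("Record keeping"),
        ComplianceItem("Transparency to affected individuals"),
    ])

    def outstanding(self) -> list:
        """Names of all items not yet completed, across both phases."""
        return [item.name
                for item in self.before_deployment + self.during_operation
                if not item.done]
```

Tracking the two phases separately matters because the pre-deployment items gate market entry, while the operational items are ongoing duties that never close out.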
Common Business Scenarios
HR/Recruitment AI
Using AI to screen CVs, assess candidates, monitor employees? High-risk category. Full compliance requirements.
Customer Service AI
Chatbot answering customer queries? Limited-risk. Must disclose it's AI.
Internal Analytics
AI analysing sales data, forecasting demand? Likely minimal-risk. No specific requirements.
Credit/Insurance Decisions
AI involved in creditworthiness or insurance pricing? High-risk. Full compliance regime.
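For the limited-risk chatbot scenario, the transparency duty is simple to implement: tell users they are talking to an AI. A minimal sketch (the disclosure wording and function shape are assumptions, not prescribed by the Act):

```python
# Wording is an assumption -- the Act requires disclosure, not this text.
AI_DISCLOSURE = "You are chatting with an AI assistant."

def respond(answer: str, first_turn: bool) -> str:
    """Prepend an AI disclosure to the first reply in a chat session."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n{answer}"
    return answer
```

Disclosing once per session, at the start, is one common pattern; the key point is that users must know they are interacting with AI before relying on it.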
Timeline
- 2024: AI Act entered into force
- 2025: Prohibited AI bans apply
- 2026: High-risk requirements apply fully
- 2027: Full enforcement
If you're deploying high-risk AI, compliance deadlines are imminent.
Preparing for AI Act
Inventory AI systems
What AI do you develop, deploy, or use? Classify by risk level.
Gap assessment
For high-risk systems, assess against requirements. Identify gaps.
Compliance programme
Risk management, documentation, quality systems, human oversight.
Governance
Who owns AI compliance? Clear accountability.
Supply chain
What about AI from vendors? Flow-down requirements.
What We Help With
AI governance intersects with cyber security:
- AI security: Protecting AI systems from attack
- AI data protection: DLP for AI inputs
- AI risk management: Part of broader risk framework
- Compliance integration: AI Act alongside NIS2, GDPR, etc.
