What Is the EU AI Act and Does It Affect My Business?

Quick Answer

The EU AI Act is a regulation that categorises AI systems by risk level, with requirements ranging from transparency obligations to a strict compliance regime for high-risk AI. UK businesses selling into EU markets, using AI in EU operations, or building AI systems for EU deployment need to understand it.

What the AI Act Is

The EU AI Act is the world's first comprehensive AI regulation. It:

  • Categorises AI systems by risk level
  • Sets requirements based on risk
  • Bans certain AI applications outright
  • Requires transparency for others
  • Creates a compliance framework for high-risk AI
It's about AI systems, not just chatbots. Many business applications fall within scope.

Risk Categories

Prohibited AI (Banned)

Not allowed:

  • Social scoring by governments
  • Real-time biometric identification in public (with exceptions)
  • Emotion recognition in workplace/education
  • AI exploiting vulnerable groups
  • Subliminal manipulation causing harm
These are prohibited outright in the EU.

High-Risk AI (Strict Requirements)

Categories:

  • AI in critical infrastructure (transport, energy, water)
  • AI in education (access, assessment)
  • AI in employment (recruitment, evaluation, monitoring)
  • AI in essential services (credit scoring, insurance, social benefits)
  • AI in law enforcement and border control
  • AI in legal/democratic processes
Requirements:
  • Risk management system
  • Data governance
  • Technical documentation
  • Record keeping
  • Transparency to users
  • Human oversight
  • Accuracy, robustness, security
  • Conformity assessment
This is where most compliance effort sits.

Limited-Risk AI (Transparency Obligations)

Examples:

  • Chatbots
  • Emotion recognition systems
  • Deepfake generators
  • AI-generated content
Requirement: Must disclose that users are interacting with AI or that content is AI-generated.

Minimal-Risk AI (No Requirements)

Most AI applications fall here—spam filters, inventory management, etc. No specific requirements.
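The four tiers above can be summarised as a simple lookup from risk level to obligations. A minimal sketch, assuming a Python model of the article's summary (the tier names and obligation strings paraphrase the sections above, not the legal text):

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act (summarised)."""
    PROHIBITED = "prohibited"  # banned outright
    HIGH = "high"              # strict compliance regime
    LIMITED = "limited"        # transparency obligations
    MINIMAL = "minimal"        # no specific requirements

# Obligations per tier, paraphrased from the sections above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["Do not deploy in the EU"],
    RiskTier.HIGH: [
        "Risk management system", "Data governance",
        "Technical documentation", "Record keeping",
        "Transparency to users", "Human oversight",
        "Accuracy, robustness, security", "Conformity assessment",
    ],
    RiskTier.LIMITED: ["Disclose AI interaction / AI-generated content"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the summarised obligations for a given risk tier."""
    return OBLIGATIONS[tier]
```

This is an illustration of the structure, not legal advice; the actual obligations are defined in the Act itself.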

UK Position

Post-Brexit: the UK isn't directly subject to the EU AI Act.

But:

  • UK businesses serving EU markets must comply
  • AI systems deployed in EU must meet requirements
  • UK is developing its own AI framework (currently lighter touch)
  • Supply chain requirements may flow through
Practical reality: If you do business with the EU or deploy AI in the EU, you need to understand the AI Act.

What High-Risk Means in Practice

If you develop or deploy high-risk AI:

Before deployment

  • Risk assessment
  • Technical documentation
  • Quality management system
  • Conformity assessment (self or third-party)
  • CE marking
  • EU registration

During operation

  • Post-market monitoring
  • Incident reporting
  • Record keeping
  • Transparency to affected individuals

For deployers (not just developers)

  • Human oversight
  • Input data relevance
  • Monitoring for risks
  • Informing affected individuals

Common Business Scenarios

HR/Recruitment AI

Using AI to screen CVs, assess candidates, monitor employees? High-risk category. Full compliance requirements.

Customer Service AI

Chatbot answering customer queries? Limited-risk. Must disclose it's AI.

Internal Analytics

AI analysing sales data, forecasting demand? Likely minimal-risk. No specific requirements.

Credit/Insurance Decisions

AI involved in creditworthiness or insurance pricing? High-risk. Full compliance regime.
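The scenarios above amount to a lookup from use case to risk tier. A minimal, illustrative sketch (the scenario keys are hypothetical labels for this example, not terminology from the Act):

```python
# Illustrative mapping of the business scenarios above to EU AI Act
# risk tiers; the keys are hypothetical labels, not legal categories.
SCENARIO_TIER = {
    "cv_screening": "high",          # HR/recruitment AI
    "employee_monitoring": "high",
    "customer_chatbot": "limited",   # must disclose it's AI
    "sales_forecasting": "minimal",  # internal analytics
    "credit_scoring": "high",
    "insurance_pricing": "high",
}

def risk_tier(scenario: str) -> str:
    """Look up a scenario's tier; unknown uses need a proper assessment."""
    return SCENARIO_TIER.get(scenario, "needs-assessment")

print(risk_tier("customer_chatbot"))  # limited
```

In practice, classification depends on the specific deployment context, so any unknown use case should default to a formal assessment rather than an assumption of minimal risk.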

Timeline

  • 2024: AI Act entered into force
  • 2025: Prohibited AI bans apply
  • 2026: High-risk requirements apply fully
  • 2027: Full enforcement

If you're deploying high-risk AI, compliance deadlines are imminent.

Preparing for AI Act

Inventory AI systems

What AI do you develop, deploy, or use? Classify by risk level.

Gap assessment

For high-risk systems, assess against requirements. Identify gaps.

Compliance programme

Risk management, documentation, quality systems, human oversight.

Governance

Who owns AI compliance? Clear accountability.

Supply chain

What about AI from vendors? Flow-down requirements.
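The inventory and gap-assessment steps above can be sketched as a simple data structure. This is a hypothetical illustration: the requirement strings paraphrase the high-risk obligations listed earlier, and `AISystem` is an invented helper, not a real compliance tool.

```python
from dataclasses import dataclass, field

# Summarised high-risk requirements from the sections above
# (paraphrased, not legal text).
HIGH_RISK_REQUIREMENTS = [
    "risk management system",
    "data governance",
    "technical documentation",
    "record keeping",
    "transparency to users",
    "human oversight",
    "accuracy, robustness, security",
    "conformity assessment",
]

@dataclass
class AISystem:
    """One entry in an AI inventory: what it is, its tier, controls in place."""
    name: str
    risk_tier: str  # "prohibited" | "high" | "limited" | "minimal"
    controls_in_place: set[str] = field(default_factory=set)

def gap_assessment(system: AISystem) -> list[str]:
    """Return outstanding high-risk requirements; empty for other tiers."""
    if system.risk_tier != "high":
        return []
    return [r for r in HIGH_RISK_REQUIREMENTS
            if r not in system.controls_in_place]

recruiter = AISystem("CV screening tool", "high",
                     controls_in_place={"record keeping", "human oversight"})
print(gap_assessment(recruiter))  # the six outstanding requirements
```

A real programme would attach evidence, owners, and deadlines to each gap; the point here is just that inventory plus classification plus gap assessment is a tractable, structured exercise.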

What We Help With

AI governance intersects with cyber security:

  • AI security: Protecting AI systems from attack
  • AI data protection: data loss prevention (DLP) for AI inputs
  • AI risk management: Part of broader risk framework
  • Compliance integration: AI Act alongside NIS2, GDPR, etc.
We help organisations understand AI risks—regulatory and security—and implement appropriate controls.
