Quick answer: Yes. AI lowers the barrier to creating malware, writing convincing phishing, and automating attacks. Attackers don't need to be experts anymore—AI helps them code, craft, and scale. Defence must adapt to faster, more sophisticated, more personalised attacks.
What Attackers Use AI For
Malware creation and modification
AI helps attackers:
- Generate malware code from descriptions
- Modify existing malware to evade detection
- Create polymorphic malware (constantly changing)
- Debug and improve malicious code
- Bypass security controls faster
Commercial AI has guardrails. Jailbreaks exist. Underground AI models have no restrictions.
Phishing at scale
AI enables:
- Grammatically perfect phishing in any language
- Personalised content using scraped information
- Writing style mimicry
- Rapid generation of variations
- Automated A/B testing of campaigns
Vulnerability discovery
AI can:
- Analyse code for vulnerabilities
- Suggest exploits for discovered flaws
- Automate fuzzing
- Identify attack paths
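Fuzzing cuts both ways: it is also a standard defensive testing technique. The sketch below is a minimal random fuzzer in Python against a hypothetical `parse_record` function (both names are illustrative, not from any real tool); it throws malformed inputs at the parser and collects the ones that crash it.

```python
import random
import string

def parse_record(line: str) -> tuple[str, int]:
    """Hypothetical parser under test: expects 'name:count'."""
    name, count = line.split(":")
    return name, int(count)

def fuzz(target, iterations: int = 1000, seed: int = 0) -> list[str]:
    """Throw random inputs at the target and collect those that raise."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        length = rng.randint(0, 20)
        candidate = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

crashes = fuzz(parse_record)
# Most random strings lack the 'name:number' shape, so the parser
# raises on nearly all of them — each crash is a lead worth triaging.
```

Real fuzzers (AFL++, libFuzzer) add coverage guidance and input mutation; AI-assisted fuzzing automates the triage and harness-writing steps on top of this basic loop.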
Social engineering enhancement
AI powers:
- Voice cloning for vishing
- Deepfake video for impersonation
- Realistic chatbots for pretexting
- Automated OSINT gathering
What's Actually Happening (2026)
AI-assisted attacks are common
Most attacks now have some AI involvement—usually in content creation (phishing, lures) rather than pure technical exploitation.
Sophistication barrier has dropped
Attackers who couldn't code can now create functional malware. The pool of capable attackers has expanded.
Attack speed has increased
AI accelerates every phase: reconnaissance, payload creation, delivery, evasion. Defenders have less time.
Personalisation has increased
Attacks are more targeted because AI makes personalisation cheap.
Underground AI services exist
WormGPT, FraudGPT, and others—criminal AI services with no guardrails, sold as a service.
How to Defend Against AI-Powered Threats
Fight AI with AI
AI-enhanced detection:
- Machine learning-based email security
- Behavioural analysis (not just signatures)
- Anomaly detection across environments
- Adaptive threat intelligence
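At its core, anomaly detection means baselining normal behaviour and flagging large deviations. A minimal sketch using a z-score over a hypothetical metric (daily failed-login counts for one account—the numbers are invented for illustration); production systems use far richer models, but the principle is the same.

```python
from statistics import mean, stdev

def find_anomalies(baseline: list[float], observations: list[float],
                   threshold: float = 3.0) -> list[float]:
    """Flag observations more than `threshold` standard deviations
    from the baseline mean (a classic z-score detector)."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in observations if abs(x - mu) > threshold * sigma]

# Hypothetical daily failed-login counts for one account.
baseline = [3, 5, 4, 6, 5, 4, 3, 5, 4, 6]
today = [4, 5, 250]  # 250 failed logins is far outside the norm
print(find_anomalies(baseline, today))  # → [250]
```

The advantage over signatures: the detector never needs to have seen the specific attack before, only the account's normal behaviour.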
Assume compromise
Zero Trust principles:
- Verify every access request
- Limit blast radius
- Detect post-compromise activity
- Prepare for breach
Focus on behaviour, not content
Detection evolution:
- What the malware does, not what it looks like
- User behaviour anomalies
- Network traffic patterns
- Process behaviour analysis
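A toy behavioural rule makes the contrast with signatures concrete: the rule below flags office applications spawning shells (a classic macro-malware pattern) and works regardless of what the payload looks like. The process names and events are illustrative, not output from any real EDR.

```python
# Behavioural rule: office applications should not spawn shells.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
SUSPICIOUS_CHILDREN = {"powershell.exe", "cmd.exe", "wscript.exe"}

def is_suspicious(parent: str, child: str) -> bool:
    """Signature-agnostic check: what matters is the behaviour
    (who spawned what), not the hash of any binary involved."""
    return (parent.lower() in OFFICE_PARENTS
            and child.lower() in SUSPICIOUS_CHILDREN)

events = [
    ("explorer.exe", "winword.exe"),    # user opens Word: normal
    ("winword.exe", "powershell.exe"),  # Word spawns a shell: alert
]
alerts = [e for e in events if is_suspicious(*e)]
print(alerts)  # → [('winword.exe', 'powershell.exe')]
```

AI-generated, polymorphic malware defeats hash- and signature-matching, but it still has to *behave* maliciously to achieve anything—which is why behavioural rules age far better.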
Strengthen authentication
Phishing resistance:
- Passkeys and FIDO2
- Phishing-resistant MFA
- Conditional Access
- Zero standing privilege
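What makes passkeys and FIDO2 phishing-resistant is origin binding: the authenticator signs the origin it actually saw along with the server's challenge, so a response captured on a lookalike domain fails verification at the real site. The sketch below uses HMAC in place of real public-key signatures to keep it short—a deliberate simplification, not how WebAuthn actually signs.

```python
import hashlib
import hmac

DEVICE_SECRET = b"per-site-credential-key"  # stands in for the private key

def sign_assertion(origin: str, challenge: bytes) -> bytes:
    """The authenticator binds its response to the origin it saw."""
    return hmac.new(DEVICE_SECRET, origin.encode() + challenge,
                    hashlib.sha256).digest()

def server_verify(expected_origin: str, challenge: bytes,
                  assertion: bytes) -> bool:
    expected = hmac.new(DEVICE_SECRET, expected_origin.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

challenge = b"random-server-nonce"
good = sign_assertion("https://bank.example", challenge)
phished = sign_assertion("https://bank-example.evil", challenge)  # lookalike
print(server_verify("https://bank.example", challenge, good))     # True
print(server_verify("https://bank.example", challenge, phished))  # False
```

No amount of AI polish on the phishing page changes this outcome: the user cannot be tricked into handing over a credential that only works for the wrong origin.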
Update training
Modern awareness:
- AI-quality attacks are the norm
- Visual verification is unreliable (deepfakes)
- Verification processes essential
- Healthy scepticism
Increase verification
Process controls:
- Out-of-band verification for sensitive actions
- Multi-person authorisation
- Callback procedures
- "Trust but verify" culture
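Multi-person authorisation is simple enough to express as a gate in code. A minimal sketch (the function name and threshold are illustrative): a sensitive action proceeds only once enough *distinct* approvers have signed off.

```python
def dual_authorised(approvals: set[str], required: int = 2) -> bool:
    """Require `required` distinct approvers before a sensitive action
    (e.g. a large payment) proceeds. Using a set deduplicates approvers,
    so one person approving twice still counts once."""
    return len(approvals) >= required

print(dual_authorised({"alice"}))          # False: one approver
print(dual_authorised({"alice", "bob"}))   # True: two distinct approvers
```

The value against AI-enabled fraud is that a deepfaked voice or flawless email can only ever compromise one step; the second approver, reached out-of-band, breaks the attack.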
What We're Doing
We're adapting our services for the AI threat landscape:
Technology:
- AI-powered email security
- Behavioural EDR/MDR
- Anomaly-based detection
People and process:
- Updated security awareness training
- Verification procedure guidance
- Incident response for AI-enabled attacks
Ongoing:
- Monitoring AI threat evolution
- Updating defences as threats change
- Sharing relevant intelligence with clients
---
