EU AI Act
The world's first comprehensive AI regulation. Uses a risk-based classification system.
Risk Tiers
Unacceptable (Banned): Social scoring, real-time biometric surveillance (with limited exceptions).
High-risk (Strict compliance): Employment screening AI, credit scoring, medical devices, law enforcement, critical infrastructure.
Limited risk (Transparency obligations): Chatbots must disclose AI use, deepfake generators must label output.
Minimal risk (No requirements): Spam filters, AI in games.
Key Requirements for High-Risk Systems
- Risk management system throughout lifecycle
- Data governance and documentation
- Technical documentation and record-keeping
- Transparency and information to users
- Human oversight measures
- Accuracy, robustness, and cybersecurity
Timeline
- February 2025: Prohibited practices take effect
- August 2025: General-purpose AI rules apply
- August 2026: Full high-risk AI requirements apply
Impact on Security Teams
The Act explicitly requires cybersecurity measures for high-risk AI systems. AI security testing, red teaming, and vulnerability management become compliance requirements for organizations deploying high-risk AI in the EU.