EU AI Act

The EU AI Act is the world’s first comprehensive law regulating artificial intelligence systems, with an emphasis on trust, transparency, and security.

What is the EU AI Act?

Proposed by the European Commission in 2021 and formally adopted in 2024, the EU AI Act introduces a risk-based framework for AI systems. It establishes obligations proportionate to the risk level of the AI, ranging from minimal risk to unacceptable risk, and emphasizes that AI systems must be secure, transparent, and resilient against manipulation or misuse.

Scope and applicability


The EU AI Act applies to:

  • Providers: organizations that develop AI systems.
  • Deployers: entities that use AI systems in the course of their activities within the EU.
  • Importers and distributors: organizations placing AI systems on the EU market.

It covers a wide range of applications, imposing the strictest requirements on high-risk systems such as biometric identification, healthcare diagnostics, and AI in critical infrastructure.

Key requirements


  • Risk-based classification: minimal, limited, high, and unacceptable risk categories.
  • High-risk AI obligations: rigorous testing, documentation, human oversight, and security measures.
  • Data quality and governance: training data must be relevant, representative, and free from bias.
  • Transparency: users must be informed when interacting with AI.
  • Security and resilience: systems must be designed to withstand manipulation and cyber threats.
  • Enforcement and penalties: fines of up to €35 million or 7% of global annual turnover, whichever is higher.
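The tiered classification above can be sketched in code. This is a minimal illustration, not a legal tool: the use-case names and their tier assignments are simplified examples drawn from the categories mentioned in this article, and real classification requires legal review against the Act's annexes.

```python
from enum import Enum
from typing import Optional


class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations


# Illustrative mapping only; keys are hypothetical labels, not terms
# defined by the Act itself.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "biometric_identification": RiskTier.HIGH,
    "healthcare_diagnostics": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def classify(use_case: str) -> Optional[RiskTier]:
    """Return the risk tier for a known use case, or None if the
    use case is unknown and needs manual assessment."""
    return USE_CASE_TIERS.get(use_case)
```

An unknown use case deliberately returns `None` rather than a default tier, since under-classifying a high-risk system is exactly the failure mode a compliance process must avoid.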

Impact on SecOps


For SecOps teams, the AI Act introduces new operational challenges:

  • Secure AI lifecycle: SOCs must ensure that AI models and data pipelines are monitored for tampering.
  • Incident detection: adversarial attacks against AI systems, such as data poisoning, require new detection capabilities.
  • Compliance support: SOCs must provide logs and monitoring evidence to support audits.
  • Vendor oversight: organizations using third-party AI must verify compliance with the Act.
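Two of these duties, monitoring model artifacts for tampering and producing audit evidence, can be combined in a single control: verifying a model file against a known-good hash and emitting a structured log record. The sketch below is an assumption about how such a check might look, not a control prescribed by the Act; the function names and the JSON record fields are illustrative.

```python
import hashlib
import json
import time
from pathlib import Path


def sha256_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def check_artifact(path: Path, expected_sha256: str) -> dict:
    """Compare a model artifact against its known-good digest and
    emit an audit-ready JSON record on stdout."""
    actual = sha256_file(path)
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "artifact": str(path),
        "expected_sha256": expected_sha256,
        "actual_sha256": actual,
        "status": "ok" if actual == expected_sha256 else "tamper_suspected",
    }
    print(json.dumps(record))
    return record
```

In practice such records would be shipped to the SOC's log pipeline, where a `tamper_suspected` status can both trigger an alert and serve as monitoring evidence during an audit.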

The AI Act brings AI security firmly into the domain of operational cybersecurity.
