Who EU AI Act Applies To (Scope and Territorial Reach)

Welcome to this lesson on the EU AI Act. Today, we’re going to explore how far the AI Act reaches, who it applies to, and how risk determines your obligations. By the end, you’ll not only know the law—you’ll be able to determine whether an AI system you develop, deploy, or distribute falls within the Act’s scope.

When I worked with a European fintech start-up, this was the main challenge: they had AI systems developed abroad but used in the EU. They weren’t sure if they were “providers” or “deployers,” and it almost caused a major compliance gap. By understanding the territorial scope, they avoided heavy fines and reputational damage.

Lesson Objectives

By the end of this session, you will understand:

  1. Which organisations and individuals fall under the AI Act.
  2. How the Act applies to both EU and non-EU actors.
  3. The role of the risk-based classification system in defining obligations.

The Broad Scope of the AI Act

The AI Act applies to three main categories of actors:

  1. Providers – entities that develop AI systems for placing on the market or putting into service within the EU, regardless of their location.
  2. Deployers (Users) – organisations or individuals using an AI system in the course of a professional activity within the EU.
  3. Other Relevant Actors – including importers, distributors, and authorised representatives who make AI systems available in the EU.

Example:
A Canadian software company develops an AI fraud detection system and sells it to a bank in Germany.

  • The Canadian company is a provider under the AI Act.
  • The German bank using that system is a deployer.

Here is how regulators will read this: both the provider and the deployer have obligations. Documentation, risk assessment, and post-market monitoring are required even if the provider is based outside the EU.
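
To make these roles concrete, here is a minimal sketch in Python. The function, its flags, and the mapping rules are my own simplifications for illustration; the Act's actual definitions of importer and distributor carry more conditions than a single boolean.

```python
from enum import Enum, auto

class Actor(Enum):
    PROVIDER = auto()
    DEPLOYER = auto()
    IMPORTER = auto()
    DISTRIBUTOR = auto()

def classify_actor(develops_and_places_on_eu_market: bool,
                   uses_professionally_in_eu: bool,
                   imports_non_eu_system_into_eu: bool,
                   resells_or_supplies_in_eu: bool) -> set[Actor]:
    """Map an organisation's activities to AI Act actor roles (simplified).

    One organisation can hold several roles at once, so a set is returned.
    """
    roles: set[Actor] = set()
    if develops_and_places_on_eu_market:
        roles.add(Actor.PROVIDER)
    if uses_professionally_in_eu:
        roles.add(Actor.DEPLOYER)
    if imports_non_eu_system_into_eu:
        roles.add(Actor.IMPORTER)
    if resells_or_supplies_in_eu:
        roles.add(Actor.DISTRIBUTOR)
    return roles

# The Canadian fraud-detection vendor from the example above:
print(classify_actor(True, False, False, False))   # {<Actor.PROVIDER: 1>}
# The German bank using the system in its business:
print(classify_actor(False, True, False, False))   # {<Actor.DEPLOYER: 2>}
```

Returning a set rather than a single role matters in practice: a bank that fine-tunes a vendor's model and uses it internally can be both provider and deployer at once.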

Territorial Reach: Beyond EU Borders

The AI Act is extraterritorial. This means it applies to:

  • EU-based actors – Any provider, deployer, importer, or distributor operating within the EU.
  • Non-EU actors – Any provider offering AI systems to the EU market, or whose AI outputs are used in the EU.

This mirrors the GDPR's approach: if your AI system is placed on the EU market or its output is used in the EU, the law applies, full stop.

Example:
A U.S. start-up develops an AI recruitment tool used by a French company to hire staff. Even though the start-up has no offices in the EU, the Act applies because the AI system’s results are used within the EU.

When I worked with an HR tech company, we mapped every AI output to EU use. This ensured that non-EU providers were aware of obligations before any contracts were signed, avoiding regulatory surprises.
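
One hedged way to operationalise that mapping is a simple territorial screen like the sketch below. The helper and its three boolean triggers are hypothetical simplifications of the Act's wording; real scoping also turns on actor role and the exemptions covered later in this lesson.

```python
def ai_act_applies(established_in_eu: bool,
                   placed_on_eu_market: bool,
                   output_used_in_eu: bool) -> bool:
    """Rough territorial-scope screen: any single trigger brings the Act in.

    Simplified: actual scoping also depends on the actor's role and on the
    exemptions discussed below.
    """
    return established_in_eu or placed_on_eu_market or output_used_in_eu

# The U.S. recruitment-tool start-up from the example above: no EU offices,
# but a French company uses the tool's output to hire staff.
print(ai_act_applies(established_in_eu=False,
                     placed_on_eu_market=False,
                     output_used_in_eu=True))  # True
```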

Risk-Based Categories and Their Impact on Scope

The AI Act uses a risk-based approach to determine compliance obligations. Beyond the practices banned outright as unacceptable risk (Article 5), such as social scoring by public authorities, systems fall into three tiers:

  1. High-risk AI systems – Subject to strict requirements: conformity assessments, technical documentation, post-market monitoring, and human oversight.
  2. Limited-risk AI systems – Require transparency obligations, such as notifying users that they are interacting with AI.
  3. Minimal/no-risk AI systems – No binding obligations, but voluntary codes of conduct are encouraged.

Importantly, the Act's scope provisions apply to all categories; what varies with the risk tier is the type and intensity of the obligations. The short lookup sketch after the examples below makes this tiering concrete.

Example:

  • High-risk: Credit scoring AI, autonomous driving systems, medical diagnostic AI.
  • Limited-risk: AI chatbots for customer service.
  • Minimal-risk: AI art generators for personal use.

When I worked with a European healthcare provider, classifying a medical imaging AI correctly was critical. Misclassification could have led to substantial fines under the Act's penalty provisions.
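
Here is the promised lookup sketch tying tiers to obligations. The dictionary entries are abbreviated paraphrases of the points above, not the Act's text, and the tier names are informal labels of my own.

```python
# Simplified obligation sets per risk tier (paraphrased, not exhaustive).
RISK_OBLIGATIONS: dict[str, list[str]] = {
    "unacceptable": ["prohibited - may not be placed on the EU market"],
    "high": ["conformity assessment", "technical documentation",
             "post-market monitoring", "human oversight"],
    "limited": ["transparency notice: users must know they face AI"],
    "minimal": ["no binding obligations; voluntary codes encouraged"],
}

def obligations_for(risk_tier: str) -> list[str]:
    """Return the simplified obligation set for a given risk tier."""
    return RISK_OBLIGATIONS[risk_tier]

# A medical imaging AI, as in the healthcare example above, is high-risk:
print(obligations_for("high"))
```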

Exemptions

Some uses are explicitly excluded from the AI Act’s scope, including:

  • AI systems used exclusively for military, defence, or national security purposes.
  • AI developed or used solely for research and development before market placement.
  • AI used by individuals for purely personal, non-professional purposes.

Example:
If a researcher is developing an AI algorithm in a university lab and it is not yet available to the public, the AI Act obligations do not yet apply.
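
As a quick screen, these exclusions can be encoded like the sketch below. The three flags are my own simplification; the real tests in the Act are more nuanced (for example, real-world testing is treated differently from lab R&D).

```python
def is_exempt(military_defence_only: bool,
              pre_market_rnd_only: bool,
              purely_personal_use: bool) -> bool:
    """Simplified exemption screen based on the exclusions listed above."""
    return military_defence_only or pre_market_rnd_only or purely_personal_use

# The university-lab example above: pure R&D, not yet placed on the market.
print(is_exempt(military_defence_only=False,
                pre_market_rnd_only=True,
                purely_personal_use=False))  # True
```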

Practical Exercise

  • Identify an AI system in your organisation. Map it to one of the actor categories (provider, deployer, importer, distributor).
  • Determine the risk classification (high, limited, minimal).
  • Note whether any exemptions apply.

Use this mapping as your first step toward building an AI Act readiness plan. Below are the templates I've used with a European bank; they helped the organisation get audit-ready in under 48 hours.

  • Actor Identification Template
  • Risk Assessment Starter Sheet
  • Compliance Documentation Checklist
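
If you prefer to capture the exercise in code rather than a spreadsheet, a minimal worksheet record could look like this sketch; the class and field names are hypothetical, not drawn from the Act or the templates above.

```python
from dataclasses import dataclass, field

@dataclass
class AIActMapping:
    """One row of the scoping exercise: system, roles, risk, exemptions."""
    system_name: str
    actor_roles: list[str]        # e.g. ["provider"] or ["deployer"]
    risk_tier: str                # "unacceptable" | "high" | "limited" | "minimal"
    exemptions: list[str] = field(default_factory=list)

    def in_scope(self) -> bool:
        # In scope unless at least one exemption applies (simplified test).
        return not self.exemptions

row = AIActMapping("fraud-detection model", ["deployer"], "high")
print(row.in_scope())  # True
```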

Key Takeaways

  1. The AI Act applies to any AI system placed on the EU market or whose output is used in the EU, whether the provider is established inside or outside the EU.
  2. Main actors in scope: providers, deployers, importers, distributors, authorised representatives.
  3. The risk level of the AI system determines the type of compliance requirements.
  4. Certain uses, such as military applications or personal, non-professional use, are exempt.

The bottom line: knowing the scope and territorial reach is the first step to avoiding regulatory fines, reputational damage, and restricted market access. Penalties scale with the violation: up to EUR 35 million or 7% of annual global turnover for prohibited practices, and up to EUR 15 million or 3% for breaches of most other obligations, including those attached to high-risk systems, an enforcement regime even stricter than the GDPR's.
