Decentralised supervision and coordinated enforcement

With the entry into force of the EU Artificial Intelligence Act (AI Act) in August 2024, its substantive rules on artificial intelligence have begun to apply directly across the European Union in phases. Yet the practical effectiveness of the AI Act depends in large part on national implementation.

At EU level, enforcement of the AI Act is coordinated by the AI Office, which also directly supervises general-purpose AI models. Its work is supported by three advisory bodies: the European Artificial Intelligence Board, the Scientific Panel and the Advisory Forum. At national level, Member States cooperate with the European Commission and the AI Office and must designate competent authorities to organise supervision and to provide the tools needed for enforcement and coordination. On 20 April 2026, the Netherlands took an important step in this process by publishing a draft Implementation Act (Uitvoeringswet AI-verordening), together with an explanatory memorandum (Memorie van Toelichting), setting out how AI governance will be organised at national level.

A central policy choice underlying the draft legislation is the adoption of a decentralised supervisory model. Rather than establishing a single new AI authority, the Netherlands assigns supervisory responsibilities to multiple existing regulators, each operating within the domain they already oversee. This approach reflects an understanding that AI risks materialise in concrete sector‑specific contexts, including finance, healthcare, infrastructure, employment and public administration. Authorities already active in these sectors are, therefore, regarded as suitably positioned to evaluate the interaction between AI systems and existing legal frameworks, risks and safeguards.

To mitigate the risk of fragmentation inherent in a decentralised model, the draft Implementation Act places strong emphasis on legally mandated coordination. National competent authorities and market surveillance authorities will be required to conclude formal cooperation protocols, setting out arrangements for information‑sharing, coordination of supervisory activities and, importantly, a consistent interpretation of key concepts under the AI Act. Decentralisation is thus explicitly paired with institutionalised coordination.

Dutch supervision and regulators

The purpose of the Implementation Act is to provide for effective, coherent, efficient, and independent market supervision of compliance with the AI Act in the Netherlands. The proposed supervisory framework consists of eight market surveillance authorities, two of which assume a coordinating role and one of which acts as a central contact point. The market surveillance authorities are expected to work closely with existing sectoral and domain‑specific supervisory authorities, as well as with the authorities designated as fundamental rights authorities under existing EU law. No new supervisory body will be established.

  • The Dutch Data Protection Authority (AP) and the State Inspectorate for Digital Infrastructure (RDI) jointly ensure coordination for an effectively functioning AI supervisory framework. The RDI acts as the central contact point.
  • The AP is designated as the market surveillance authority for prohibited AI practices, the majority of high‑risk AI systems listed in Annex III of the AI Act (areas of application), and the supervision of the transparency requirements imposed by the AI Act on certain AI systems. Within the AP, AI market surveillance will be organised in an autonomous, parallel, and clearly visible manner alongside the AP’s other existing tasks, such as supervision of compliance with the GDPR.
  • The RDI and the Inspectorate for the Environment and Transport (ILT) are designated as market surveillance authorities for high‑risk AI systems in critical infrastructure.
  • For the supervision of high‑risk AI systems listed in Annex I, Section A, of the AI Act (products), the five existing market surveillance authorities for those products are designated, namely the RDI, the ILT, the Health and Youth Care Inspectorate (IGJ), the Netherlands Food and Consumer Product Safety Authority (NVWA), and the Netherlands Labour Authority (NLA).
  • The Authority for the Financial Markets (AFM) and the Dutch Central Bank (DNB) are designated to supervise high‑risk AI systems, prohibited AI practices, and transparency requirements in the financial sector.
  • For the supervision of high‑risk AI systems developed and used by parts of the judiciary in the context of the administration of justice, the Procurator General at the Supreme Court (PG HR) and the President of the Administrative Jurisdiction Division of the Council of State (Vz ABRvS) are designated.
  • The authorities that currently supervise the protection of fundamental rights, and are therefore designated as fundamental rights authorities under the AI Act, are the Netherlands Institute for Human Rights (CRM), the AP in its capacity as data protection authority (GBA), the PG HR, the Vz ABRvS, the management board of the Central Appeals Tribunal (CRvB), and the management board of the Trade and Industry Appeals Tribunal (CBb).

Beyond the institutional set‑up, the draft Implementation Act also establishes a detailed enforcement and sanctions framework. Market surveillance authorities will be able to impose administrative fines up to the maximum amounts set out in Article 99 of the AI Act. They may also order corrective measures, suspend or prohibit the use of AI systems, mandate withdrawal from the market, or issue public warnings. The sanctioning regime closely follows the AI Act’s risk‑based structure, with separate treatment for prohibited practices, infringements relating to high‑risk AI systems, and breaches of transparency obligations.

Finally, the explanatory memorandum makes clear that the Netherlands has deliberately opted for a restrained approach to national implementation. To avoid regulatory duplication, the Implementation Act does not introduce additional substantive requirements beyond those imposed by the AI Act itself. Where AI systems are already governed by existing sectoral legislation (e.g., product safety, medical devices, or financial services frameworks), the AI Act’s requirements will be integrated into those regimes.

For further information, please refer to our earlier publications on AI.