While the AI Act is set to be fully applicable as of 2 August 2026, certain provisions have already taken effect. A first set, including general provisions on AI literacy and the prohibition of certain practices deemed to involve unacceptable risks, came into force on 2 February 2025. As of 2 August 2025, a second wave of provisions became applicable.
Gradual entry into force of the AI Act
As noted above, the entry into force of the AI Act takes place in several steps, allowing companies to adapt their processes and procedures gradually. The provisions that entered into force on 2 August 2025 are the following:
- Notified bodies (Chapter III, Section 4),
- GPAI models (Chapter V),
- Governance (Chapter VII),
- Confidentiality undertakings for supervisory authorities and notified bodies, with a view to protecting, among other things, IP and trade secrets (Article 78), and
- Penalties (Articles 99 and 100).
What are the most important obligations applicable as of 2 August 2025?
Providers of general-purpose AI (“GPAI”) models
The AI Act defines a general-purpose AI model as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.
Typical examples are generative AI models that can perform content-creation tasks, such as the technology underlying ChatGPT, or certain image or video generating tools. To recall, a “provider” is the person, body or entity that develops a general-purpose AI model, or that has such a model developed, and places it on the market in the EU or puts it into service under its own name or trademark, whether for payment or free of charge.
Key requirements for providers of GPAI models include:
- Maintaining up-to-date technical documentation, to be provided upon request to the EU AI Office and national competent authorities,
- Sharing information with downstream providers of AI systems using their models,
- Putting in place a policy to comply with EU copyright law,
- Publishing a summary of the data used for training the GPAI model (note: the EU AI Office is working on a template), and
- Designating an EU legal representative (if based outside the EU).
Providers of GPAI models released on or after 2 August 2025 have to comply immediately. Providers of GPAI models already released before that date have until 2 August 2027 to comply.
The purpose behind these obligations is to allow downstream providers to integrate such GPAI models into their own AI systems in a manner that allows them to fulfil their own obligations under the AI Act.
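To make the two-track timeline concrete, here is a minimal Python sketch. The dates are taken from the AI Act itself; the function and variable names are hypothetical shorthand of our own, not an official tool.

```python
# Minimal sketch of the two compliance tracks described above.
# Dates come from the AI Act timeline; names are hypothetical.
from datetime import date

GPAI_RULES_APPLY = date(2025, 8, 2)   # GPAI obligations become applicable
LEGACY_DEADLINE = date(2027, 8, 2)    # deadline for pre-existing models

def gpai_compliance_deadline(release_date: date) -> date:
    """Models released on or after 2 Aug 2025 must comply immediately;
    models already on the market before that date have until 2 Aug 2027."""
    if release_date >= GPAI_RULES_APPLY:
        return release_date  # obligations apply from the moment of release
    return LEGACY_DEADLINE

print(gpai_compliance_deadline(date(2025, 9, 1)))  # 2025-09-01 (immediate)
print(gpai_compliance_deadline(date(2024, 3, 1)))  # 2027-08-02
```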
Exceptions for open source GPAI models
Some of the obligations listed above do not apply if the model is released under a free and open-source license and its parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available.
Additional obligations for GPAI models with “systemic risks”
For GPAI models presenting “systemic risks”, additional obligations apply, including a mandatory notification to the European Commission.
“Systemic risks” are risks of large-scale harm from the most advanced (i.e., state-of-the-art) models at any given point in time, which cause them to have high-impact (or equivalent) capabilities. Such risks can manifest themselves, for example, through the lowering of barriers for chemical or biological weapons development, or unintended issues of control over autonomous general-purpose AI models. Which models are considered general-purpose AI models with systemic risk may change over time, reflecting the evolving state of the art and potential societal adaptation to increasingly advanced models. For now, there is a legal presumption that a general-purpose AI model has high-impact capabilities when the cumulative amount of computation used for its training, measured in floating-point operations (FLOPs), is greater than 10^25. Currently, this means that general-purpose AI models with systemic risk are developed by only a handful of companies, although this may change in the future.
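To give a sense of scale: a common rule of thumb in the deep-learning literature (not a method prescribed by the AI Act) estimates training compute as roughly 6 × the number of model parameters × the number of training tokens. The hedged sketch below applies that approximation to a hypothetical model and checks the result against the 10^25 FLOP presumption; all figures are illustrative.

```python
# Illustrative sketch only: the 6 * params * tokens estimate is a common
# rule of thumb from the deep-learning literature, NOT an AI Act method.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold under the AI Act

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of cumulative training compute in FLOPs."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 200 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(200e9, 15e12)
print(f"Estimated compute: {flops:.1e} FLOPs")                           # 1.8e+25
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # True
```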
Providers of such GPAI models must assess and mitigate systemic risks, in particular by performing model evaluations, keeping track of, documenting and reporting serious incidents, and ensuring adequate cybersecurity protection for the model and its physical infrastructure.
Code of Practice
On 10 July 2025, the EU AI Office published a voluntary Code of Practice for GPAI providers. It covers transparency, copyright, and safety, and offers a structured pathway for demonstrating compliance under the AI Act.
After the code is endorsed by the European Commission and the Member States (by an adequacy decision), AI model providers who adhere to the code are deemed to have demonstrated compliance with the AI Act. This will reduce their administrative burden and enhance legal certainty.
What about the provisions on governance and enforcement?
By 2 August 2025, Member States were expected to have designated their national competent authorities, including both notifying authorities and market surveillance authorities, to have communicated these designations and the tasks of these authorities to the European Commission, and to have made the contact details publicly available.
Below, we have included an update on Loyens & Loeff’s home markets:
- In Belgium, while no official text has been released, the government has previously announced that the BIPT (Belgian Institute for Postal Services and Telecommunications, the telecom regulator) would be designated as the national regulator under the AI Act.
- In the Netherlands, the government has not yet formally designated its national competent authorities, nor has it published any national penalty rules or guidance. Nevertheless, it is worth noting that the Dutch Data Protection Authority (Autoriteit Persoonsgegevens - AP) and the Dutch Authority for Digital Infrastructure (Rijksinspectie Digitale Infrastructuur - RDI) have published an advisory report on how the Netherlands should structure national oversight of AI. Central to the proposal is the designation of the AP and the RDI as coordinating authorities. Other bodies, such as the Human Environment and Transport Inspectorate (Inspectie Leefomgeving en Transport), the Authority for the Financial Markets (Autoriteit Financiële Markten - AFM), the Dutch Central Bank (De Nederlandsche Bank - DNB), and the Health and Youth Care Inspectorate (Inspectie Gezondheidszorg en Jeugd), would be assigned oversight based on their domain expertise, particularly in areas such as critical infrastructure, financial services, and healthcare. The AP is also proposed as the primary authority for enforcing the bans on prohibited AI practices, with cooperation from the AFM and the Authority for Consumers and Markets (Autoriteit Consument & Markt) where financial or consumer protection concerns arise. Additionally, the AP would supervise compliance with the AI Act’s transparency obligations, including the rules on chatbots and synthetic content, while the AFM and DNB would cover AI systems in the financial sector. Given the many authorities and competencies involved, the report emphasizes the urgent need for a legal framework enabling inter-agency cooperation and data sharing, and calls for the rapid designation of supervisory authorities and sufficient funding to support implementation.
- In Luxembourg, draft bill No. 8476 is currently under review. It proposes to designate the CNPD (Commission nationale pour la protection des données, the data protection regulator) as both the national competent authority and the single point of contact under the AI Act. More precisely, the CNPD would act as the market surveillance authority by default, while sector-specific regulators – inter alia, the CSSF (Commission de surveillance du secteur financier) or the CAA (Commissariat aux assurances) – would retain oversight within their respective domains, depending on the area of application of the AI system or the entity deploying it.
Sanctions and national enforcement frameworks under the AI Act
In addition, by 2 August 2025, Member States were expected to lay down more specific rules on financial penalties and other enforcement measures, which may also include warnings and non-monetary measures, applicable to infringements of the AI Act.
The AI Act itself already provides:
- That infringements of the provisions regarding prohibited AI practices may trigger administrative fines of up to EUR 35 million or 7% of global turnover;
- That infringements of certain other (exhaustively listed) provisions of the AI Act [note: most of the provisions listed will only enter into force on 2 August 2026] may trigger administrative fines of up to EUR 15 million or 3% of global turnover;
- That the supply of incorrect, incomplete or misleading information to notified bodies or national regulators may trigger administrative fines of up to EUR 7.5 million or 1% of global turnover;
- That, in each such case, the higher of the two amounts applies as the monetary cap, except for SMEs, for which the lower of the two amounts applies (see the illustrative sketch after this list); and
- That these fines are to be imposed by the competent national court or regulator, as determined at national level by local implementing legislation (see below).
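For illustration, the sketch below computes the monetary cap per tier, applying the higher-of rule and the lower-of rule for SMEs. The amounts and percentages are taken from the list above; the function name and turnover figures are hypothetical.

```python
# Illustrative sketch of the AI Act fine caps listed above.
# Tier amounts/percentages are from the article; everything else
# (function name, turnover figures) is hypothetical.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),      # EUR cap, share of global turnover
    "other_listed_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Monetary cap: the higher of the two amounts, or the lower for SMEs."""
    fixed_cap, share = FINE_TIERS[tier]
    turnover_cap = share * global_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Hypothetical examples:
print(max_fine("prohibited_practices", 1_000_000_000))       # 70000000.0 -> 7% cap prevails
print(max_fine("incorrect_information", 200_000_000, True))  # 2000000.0  -> SME: lower amount
```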
Furthermore, the AI Act also provides that the European Commission (and not the national authorities) may impose fines on providers of GPAI models of up to EUR 15 million or 3% of their global turnover, whichever is higher, in the case of certain listed acts of non-compliance. Although the provisions and obligations for GPAI models entered into force on 2 August 2025 (see above), these sanctions will only become applicable on 2 August 2026.
This article will be updated from time to time to include relevant developments in the Benelux jurisdictions.