Importantly, the proposal does not seek to “re-open” the GDPR as a whole. Instead, it focuses on specific areas where the Commission considers that compliance costs are high, enforcement is fragmented, and the user experience has degraded. This is why the proposal combines a series of targeted reforms across core GDPR concepts and daily compliance obligations, including the regulatory approach to tracking technologies, transparency requirements, incident reporting, and areas increasingly shaped by new technologies such as AI. 

This article is the second in our Digital Omnibus Proposal series, following the high-level introduction set out in our pilot article. Across the series, we will explore the proposed changes in more detail and assess their practical implications for organisations.

Below, we set out the most relevant changes for organisations and what these changes may mean in practice.

First and foremost, the Commission’s Digital Omnibus Proposal includes amendments that could affect how organisations assess whether certain information even falls within the scope of the GDPR. This part of the proposal has already been heavily criticised by privacy activists, but appears most welcome from a business perspective.

1. Clarification of “personal data” as entity-relative (Article 4(1))

The proposal seeks to clarify the GDPR definition of “personal data” by emphasising that information is not necessarily personal data for every entity that receives it. Accordingly, information may constitute personal data for one organisation, but not for another, if the latter does not have means that are “reasonably likely” to identify the individual concerned.

In this sense, identifiability becomes entity-relative. The same dataset may be personal data for the entity that holds identifying information (or a re-identification key), while not qualifying as personal data for a recipient that lacks any realistic means of linking the data back to an individual. This approach reflects the reasoning of the CJEU in Case C-413/23 P SRB v EDPS, where the Court emphasised that identifiability must be assessed in light of the means reasonably likely to be used by the entity in question.

This clarification is particularly relevant in data-sharing scenarios. It may facilitate the sharing of pseudonymised or otherwise de-identified data between parties, where the recipient’s re-identification risk is genuinely very low. In such cases, the data may remain personal data for the original controller (meaning the GDPR is applicable), but not necessarily for the receiving party (to whom the GDPR will in such case not apply).
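The entity-relative logic above can be illustrated with a minimal pseudonymisation sketch (all names and values are hypothetical): only the original controller holds the key needed to reproduce the mapping, so a recipient of the keyed output alone has no realistic means of re-identifying the individual.

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym (HMAC-SHA256).

    Only the key holder can reproduce or verify the mapping; a recipient
    without the key has no realistic means of reversing it.
    """
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

# The original controller keeps the key and can re-link records if needed.
key = b"controller-held-secret"  # never shared with the recipient

record = {
    "patient_id": pseudonymise("jane.doe@example.com", key),
    "diagnosis_code": "E11",
}
# Only the pseudonymised record is shared; for the recipient, absent the
# key or any auxiliary data, the dataset may fall outside the GDPR under
# the proposed entity-relative approach.
```

Whether the data actually ceases to be personal data for the recipient would still depend on the recipient's overall means of identification, not on the hashing technique alone.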

2. New Article 41a: pseudonymised data may fall outside GDPR for certain entities

The proposal also allows for the possibility that, in certain clearly defined situations, pseudonymised data would no longer be treated as personal data for certain entities. These situations would be determined on the basis of criteria developed by the Commission or the EDPB and further specified through implementing measures adopted by the Commission. For instance, pseudonymised health data held by a public authority or certified research body could be considered non‑personal data where the entity has no legal or practical means of re‑identifying individuals and where re‑identification is effectively prevented through technical and organisational safeguards, as assessed against criteria established by the Commission or the EDPB.  

If this approach develops into a workable mechanism, it could impact compliance positions in data sharing arrangements (e.g. where pseudonymised datasets are provided to service providers, research partners, or analytics vendors). That said, organisations will need to approach this possibility with caution, as supervisory authorities are likely to continue focusing on practical re-identification risks, including where datasets can be combined or enriched.

Cookies and similar tracking technologies have been at the centre of regulatory and business debate for years. Under the current EU framework, the rules are split between the GDPR and the ePrivacy Directive. Currently, the requirement to obtain consent for storing or accessing information on a user’s device is laid down in Article 5(3) of the ePrivacy Directive, while the subsequent processing of personal data is governed by the GDPR.

In practice, this dual regime has created complexity and legal uncertainty. Member States have implemented and enforced the ePrivacy rules differently, and organisations must navigate both the consent requirement under ePrivacy and the broader GDPR framework on lawful basis, transparency and accountability. At the same time, the widespread use of consent-based tracking models has led to the proliferation of cookie banners and increasing “consent fatigue” among users. While the banner requirement itself flows from the consent rule in the ePrivacy Directive, the separation between ePrivacy and the GDPR has contributed to fragmentation in interpretation and enforcement, making compliance more complex and, in some cases, overly formalistic.

The Digital Omnibus Proposal introduces a structural shift in how tracking technologies are regulated. Instead of treating cookie compliance primarily as an “ePrivacy issue”, the proposal introduces explicit GDPR articles addressing the storage of, and access to, personal data on the terminal equipment of users (natural persons). This takes the form of new Articles 88a and 88b GDPR, incorporating a framework for when organisations may access and store personal data on a user device. In other words, the consent requirement for accessing or storing personal data in terminal equipment would be integrated directly into the GDPR.

This change is significant because it consolidates the legal framework: supervisory authorities would apply a single instrument (i.e., the GDPR) for both the access to personal data on devices and its further processing, potentially reducing fragmentation and clarifying the relationship between consent, lawful basis and accountability.

1. New Article 88a: consent remains the baseline, but with clearer exceptions

To start with, the proposal creates a general rule mirroring the current logic of cookie law: storing personal data on a user’s device, or accessing personal data already stored, requires the user’s prior consent. 

However, Article 88a provides a set of explicit situations in which consent is not required. These exceptions are intended to cover cases where requiring a banner would be disproportionate and does not meaningfully enhance user protection. The exceptions include cases where access or storage is necessary for transmitting an electronic communication, providing a service explicitly requested by the user, limited audience measurement under specific conditions, and security purposes. 

Article 88a also sets out specific conditions for how consent must be requested in the context of terminal equipment. Under the proposed rules, consent requests for terminal equipment access must allow users to refuse consent easily, including via a single-click refusal option (or equivalent). Arguably, this is a clear legislative push against “dark patterns” or designs that render refusal unnecessarily complex.

In addition, Article 88a introduces limitations designed to prevent repeated nudging. In particular, where a user has refused consent, the controller would not be permitted to request consent again for the same purpose for at least six months. This is intended to limit persistent prompting and reduce banner fatigue.

In practice, this means cookie banner governance will become more than a simple “banner on/off” decision. Organisations may need systems capable of remembering refusal choices, applying them consistently across sessions, and ensuring consent management solutions do not trigger repeated prompts.
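As an illustration of what such governance might look like, the following is a minimal sketch of a consent log that remembers refusals per user and purpose and enforces a re-prompt cooldown. The class, field names and the 183-day approximation of “at least six months” are assumptions for the sketch, not part of the proposal.

```python
from datetime import datetime, timedelta

# Assumption: "at least six months" approximated as 183 days.
REFUSAL_COOLDOWN = timedelta(days=183)

class ConsentLog:
    """Tracks consent refusals so the same purpose is not re-prompted early."""

    def __init__(self):
        # (user_id, purpose) -> timestamp of the most recent refusal
        self._refusals = {}

    def record_refusal(self, user_id: str, purpose: str, when: datetime = None):
        self._refusals[(user_id, purpose)] = when or datetime.utcnow()

    def may_prompt(self, user_id: str, purpose: str, now: datetime = None) -> bool:
        """Return True only if no refusal exists or the cooldown has elapsed."""
        last = self._refusals.get((user_id, purpose))
        if last is None:
            return True
        return (now or datetime.utcnow()) - last >= REFUSAL_COOLDOWN
```

A real consent management platform would also need to persist these records across sessions and devices and apply them consistently wherever the banner could otherwise be triggered.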

2. New Article 88b: the move toward machine-readable consent (and the future of banners)

The other change in the proposal is the introduction of Article 88b, which signals a gradual move away from banner-driven consent (and “consent fatigue”) and toward browser-level privacy controls. Under Article 88b, controllers must ensure their online services allow users to express consent or refusal through automated, machine-readable means (such as browser or app settings), and controllers must respect those indications.

This appears to imply a shift toward a default position of no consent, with users having the option to actively opt in through their settings. In practice, this could significantly reduce the role of traditional cookie banners, and may even render them largely obsolete, since relatively few users are likely to navigate to settings to enable tracking. Notably, this mirrors an earlier dynamic in which users could technically disable cookies through browser settings but in reality rarely did so, a gap that originally drove the widespread adoption of consent banners.

Thus, Article 88a regulates the design and mechanics of banner-based consent where consent is requested directly from users, while Article 88b anticipates a more standardised, technical method of expressing tracking preferences that may, over time, reduce reliance on repeated banner interactions.

The Commission expects EU standardisation bodies to develop the technical standards needed for these mechanisms to function across browsers and services.  The proposal also introduces obligations for web browser providers (other than SMEs) to enable these technical mechanisms. Importantly, where harmonised standards are developed and followed, compliance with those standards would give rise to a presumption of compliance with the relevant GDPR requirements. This mechanism is intended to provide legal certainty and encourage consistent technical implementation across the EU.
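By way of illustration, a service could interpret an automated signal along the lines of the existing Global Privacy Control header (`Sec-GPC`). The eventual EU-standardised mechanism may well differ, so this is only an analogue, and the function name and return values are illustrative.

```python
def consent_signal(headers: dict):
    """Interpret a machine-readable tracking preference from request headers.

    Returns "refused" when an automated refusal is signalled, or None when
    no signal is present (in which case the controller could fall back to a
    banner-based consent request under the Article 88a rules).
    """
    # Global Privacy Control: "Sec-GPC: 1" expresses a refusal of tracking.
    if headers.get("Sec-GPC") == "1":
        return "refused"
    return None

# A refusal expressed at browser level must be respected: no banner,
# no non-essential cookies, no further prompting for that purpose.
```

The design point Article 88b seems to make is that the signal takes priority: where the browser has already spoken for the user, the service should not ask again.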

Transparency is one of the cornerstones of GDPR. At the same time, organisations have raised concerns that in certain low-risk situations, transparency obligations can become disproportionate, especially where processing is limited and obvious, but controllers are still required to provide extensive information in ways that are difficult to deliver meaningfully in practice.

The proposal therefore modifies Article 13(4) GDPR, introducing a more explicit exemption from information obligations in a narrow category of cases. Under the revised approach, Article 13 information would not need to be provided where personal data is collected in a “clear and circumscribed relationship”, the controller’s activity is not data-intensive, and there are reasonable grounds to assume that the individual already has certain basic information about the processing (e.g., a supermarket processing certain customer personal data for payment processing when purchasing groceries). However, the exemption is carefully limited. It would not apply where the controller shares data with recipients, transfers data outside the EU, carries out automated decision-making within the meaning of Article 22(1) GDPR, or where the processing is likely to be high risk (including where a DPIA would be required). 
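To make the cumulative conditions concrete, the exemption logic can be sketched as a simple eligibility check. The field names are illustrative shorthand for the sketch, not statutory wording.

```python
def article13_exemption_applies(ctx: dict) -> bool:
    """Illustrative check of the proposed Article 13(4) exemption.

    All positive conditions must hold AND none of the disqualifiers
    may apply; otherwise the full Article 13 information is due.
    """
    qualifies = (
        ctx["clear_circumscribed_relationship"]   # e.g. routine transaction
        and not ctx["data_intensive"]             # activity is not data-intensive
        and ctx["individual_likely_informed"]     # basic info reasonably assumed
    )
    disqualified = (
        ctx["shares_with_recipients"]
        or ctx["transfers_outside_eu"]
        or ctx["automated_decision_making"]       # Article 22(1) GDPR
        or ctx["likely_high_risk"]                # incl. where a DPIA is required
    )
    return qualifies and not disqualified
```

The point of the sketch is the asymmetry: the exemption requires every positive condition and the absence of every disqualifier, which keeps it genuinely narrow.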

The message for businesses is, therefore, not that transparency becomes optional. Instead, the proposal introduces a narrow exemption where personal data is collected in the course of a routine transaction with a service provider, in which case the formal provision of Article 13 information at the point of collection may no longer be required.

Finally, the proposal also introduces further clarity for scientific research processing. It acknowledges that providing full Article 13 GDPR information may sometimes, in a scientific research context, be impossible, disproportionate, or may undermine the research objective, in which case the controller would not be required to provide the information in the usual manner (subject to the safeguards set out in Article 89 GDPR). However, appropriate alternative measures must be implemented, for example by making relevant information publicly available or otherwise ensuring that data subjects are informed to the extent possible. This is particularly relevant for research institutions and organisations conducting long-term studies where individual-level transparency may not be feasible, but safeguards can still protect individuals’ interests.

Notably, the proposal includes new GDPR provisions addressing the processing of personal data in the context of developing and deploying AI systems and models, an area where many organisations struggle with legal basis selection and appropriate safeguards.

1. AI development and legitimate interests (new Article 88c)

To start with, the proposal expressly recognises legitimate interests (Article 6(1)(f) GDPR) as a lawful basis for the processing of personal data “in the context of the development and operation of AI systems or AI models”, where appropriate and not overridden by the rights and freedoms of individuals.

This clarification is significant. It confirms that controllers may rely on legitimate interests (rather than another legal ground, such as consent), for certain AI training, testing and operational activities, provided the balancing test is satisfied. In practice, this may lower legal uncertainty around the use of personal data for AI development and reduce the need to obtain GDPR-compliant consent (which is a rather ‘volatile’ legal basis for personal data processing).

However, this recognition is coupled with enhanced safeguards. The proposal refers in particular to strict data minimisation in the selection of training and testing data, protection against residual disclosure or unintended output of personal data, strengthened transparency requirements, and an unconditional right for individuals to object.

The provision therefore facilitates reliance on legitimate interests for AI-related processing, but it does not remove the need for a careful balancing assessment and additional safeguards (e.g., data minimisation in source selection, training and testing, protection against residual disclosure, enhanced transparency, and an unconditional right to object). The Commission has not clarified how these safeguards would be implemented or operationalised in practice, although they could, for example, be understood as referring to the controller’s interface design. Moreover, EU or Member State law may still require consent in specific contexts.

2. Special category data and AI: residual processing, but “non-use” as the default

Second, while setting out, as a general rule, that special category data (Article 9 GDPR) should in principle not be used for the development or operation of AI systems, the proposal introduces limited provisions clarifying how special category data may be treated in AI contexts. Taken together, the amendments aim to allow limited residual processing of special category data (e.g., health-related data or data relating to religious beliefs) in the context of developing and deploying AI systems, but only under strict conditions, namely that the controller effectively protects such data from being used to produce outputs, from being disclosed, or from being otherwise made available to third parties.

For businesses, these provisions reinforce the need for strong governance around sensitive data in AI workflows, including input filtering, model safeguards, output controls and vendor management.

Automated decision-making remains a high priority for regulators, especially as AI systems become embedded into business operations across industries. Article 22 GDPR currently provides individuals with protection against being subject to decisions producing legal or similarly significant effects that are based solely on automated processing (by in principle prohibiting it), unless one of the limited exceptions applies.

The proposal replaces Article 22(1) and 22(2), maintaining the same underlying structure. In essence, fully automated decisions with significant effects are permitted only where they are necessary for the performance of a contract, authorised by EU or Member State law with suitable safeguards, or based on explicit consent. 

The proposal, however, introduces an important clarification of the “contractual necessity” exception. It states that “necessity for the performance of a contract” does not require that a human could not have taken the decision. At the same time, controllers must ensure that the automated decision is genuinely necessary for the contractual purpose and that the least intrusive, equally effective solution is chosen. In other words, the necessity analysis remains substantive and cannot be satisfied merely by pointing to contractual convenience. Consider, for instance, a dating app that uses an automated matching algorithm to suggest potential matches based on users’ stated preferences (e.g., age range, location, interests). The automated processing may be considered necessary for the performance of the contract, because the core service promised to users is the scalable, real-time provision of personalised match suggestions, which cannot be delivered in a meaningful way without automation.

Although the proposal does not fundamentally restructure Article 22, it provides a timely reminder for organisations that compliance with automated decision-making rules goes beyond formal legal requirements. These obligations are increasingly intertwined with wider EU regulatory developments, notably the EU AI Act, which imposes additional requirements on high-risk AI systems (see our earlier article on the AI Act).

In practical terms, organisations relying on automation in sensitive contexts (e.g., finance, employment, insurance, or platform governance) should continue to treat Article 22 GDPR as an area of heightened compliance attention, particularly when decisions are fully automated and directly (significantly) affect individuals.

In addition to the changes in cookie consent processing, the proposal introduces practical changes intended to refine daily GDPR compliance in areas where organisations often face high administrative friction.

1. DSRs: a new “abuse of rights” ground (Article 12(5))

The proposal introduces a new ground allowing controllers to refuse to act on a request (or charge a reasonable fee) where “the data subject abuses the rights conferred by the GDPR for purposes other than the protection of their personal data”.

This amendment could become particularly relevant in scenarios involving repeated, disruptive, or strategically motivated requests. Indeed, in practice, organisations have often reported challenges with repeated, disruptive or strategically motivated requests (e.g., in the context of disputes with ex-employees or strategic litigation) where the primary purpose of the request appears to be tactical rather than data protection-related. The proposed amendment appears to respond to these concerns by expressly recognising the possibility of abusive use of data subject rights.

However, its practical impact will depend on how “abuse” is interpreted and applied by supervisory authorities and courts. Controllers should expect to justify any reliance on this ground carefully and to ensure consistent decision-making.

2. DPIAs: EU-wide harmonisation (Article 35)

The proposal aims to harmonise DPIA expectations across Member States. Under the new approach, the EDPB would compile unified lists of processing activities that do or do not require a DPIA and would develop a standard DPIA template and methodology. 

Once adopted at EU level, these lists would supersede divergent national lists, helping to ensure that organisations face the same DPIA triggers across the EU. Until then, national supervisory authority lists would continue to apply. 

For multinational organisations, this could be a meaningful simplification, particularly for cross-border projects where DPIA requirements currently vary by jurisdiction.

Under current GDPR rules, controllers must notify personal data breaches to the competent supervisory authority within 72 hours after becoming aware of the breach, unless the breach is unlikely to result in a risk to individuals. The Omnibus Proposal adjusts this framework in two significant ways: it raises the notification threshold, and it extends the reporting deadline.

1. Raised notification threshold and extended deadline

First, the proposal raises the threshold for notifying the supervisory authority. Instead of requiring notification where a breach presents a “risk”, controllers would only be required to notify where the breach is likely to result in a “high risk” to individuals. This aligns the supervisory authority notification threshold with the existing standard for notifying data subjects.

The intention behind this change appears to be to reduce the volume of low-risk breach notifications and enable supervisory authorities to focus on more serious incidents. In practice, however, this could also lead to a significant decrease in the number of reportable data breaches.

Second, the proposal extends the reporting deadline from 72 hours to 96 hours. This shift matters because many incidents are discovered with limited initial information. Additional time may reduce premature notifications based on incomplete facts, but controllers will still need robust internal processes to assess “high risk” quickly and to document all relevant decisions. In practice, the combination of a higher threshold and extended deadline places greater responsibility on controllers to conduct and document robust risk assessments. There is a potential risk that breaches may internally be classified as “low risk” where supervisory authorities would take a different view. Controllers should therefore ensure that internal breach assessment methodologies are carefully structured, documented and consistently applied.
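A minimal sketch of how these two changes interact in an internal triage workflow follows; the risk labels and function name are illustrative shorthand, not statutory terms.

```python
from datetime import datetime, timedelta

# Proposed deadline for notifying the supervisory authority: 96h (up from 72h).
NOTIFICATION_WINDOW = timedelta(hours=96)

def authority_notification_due(aware_at: datetime, risk_level: str):
    """Return the notification deadline, or None if no notification is required.

    Under the proposal, only breaches 'likely to result in a high risk' to
    individuals must be notified to the supervisory authority. Lower-risk
    breaches would not be notifiable, but the assessment should still be
    documented internally under the accountability principle.
    """
    if risk_level != "high":
        return None
    return aware_at + NOTIFICATION_WINDOW
```

The sketch also shows where the residual risk sits: everything turns on the `risk_level` input, i.e. on the quality and defensibility of the controller's own risk assessment.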

2. Integration with a NIS2 single entry point

The proposal anticipates that GDPR breach notifications will eventually be made through a single-entry point established under NIS2. This reflects a broader trend toward consolidated incident reporting and reduced duplication across GDPR, NIS2 and other cybersecurity regimes.

3. More harmonisation through EDPB templates and guidance

The proposal also assigns the EDPB an enhanced role in operational guidance, including producing a common breach notification template and a list of circumstances where breaches are likely to qualify as high risk (and thus notifiable). 

For organisations, this may, on the one hand, bring greater predictability and uniformity across the EU, as well as administrative efficiency; on the other hand, it may also bring more structured supervisory expectations around incident classification.

At the same time, while aligning the thresholds may effectively raise the notification bar for some breaches, it could also lower it for others that were not reportable under the current EU framework.

Conclusion

The Digital Omnibus Proposal does not change the fundamentals of GDPR compliance, but it introduces meaningful operational and conceptual shifts in areas where compliance has been most burdensome, fragmented or uncertain, for example with respect to cookie consent, data breach reporting and certain AI-related processing activities.

For organisations, the proposal’s clearest message is that the EU is moving toward:

  1. Fewer consent prompts but more meaningful and enforceable consent choices.
  2. More standardisation and automation in consent and compliance mechanisms.
  3. Closer alignment between privacy, cybersecurity and emerging technology regulation.

For compliance officers, it means that they will be very busy during the next compliance review cycle.

This article is the second in our Digital Omnibus Proposal series. Stay tuned for the next publication, in which we will focus on the proposal’s targeted adjustments to the AI Act and their implications for organisations developing and deploying AI systems.