Artificial intelligence adoption among businesses has accelerated significantly in recent years. According to a 2024 Eurostat survey (link), 41.17% of large EU enterprises used AI technologies in 2024, compared to 30.40% in 2023. The 2025 Stanford AI Index (link) applies a broader definition of AI adoption, covering experimental applications and generative AI tools across all types of organisations globally, and reports an even higher uptake: 78% of organisations worldwide used AI in 2024, compared to 55% in 2023.

Regionally, reported adoption rates in 2024 were 80% in Europe (57% in 2023), 82% in North America (61% in 2023), and 72% in Asia-Pacific (58% in 2023). The extent of AI use and the purposes for which enterprises adopt AI technologies vary across economic sectors.

The growing importance of AI within businesses has not gone unnoticed by the Dutch competition authority, the Authority for Consumers and Markets (ACM). As early as 2020, the ACM published a position paper on the supervision of algorithms (link).

In that paper, the ACM noted that algorithmic applications developed by market participants can undermine the proper functioning of markets. Algorithmic systems are of particular relevance to the ACM when they are used in activities that may distort competition. The ACM’s examples include algorithmic systems that set prices, influence supply and demand in the market, or give rise to price discrimination or collusive behaviour among market participants.

Competition authorities are increasingly scrutinising how AI is being integrated into commercial practices. For companies, this makes it essential to understand which applications may trigger concerns. The examples below highlight AI uses that authorities view as potentially anticompetitive, helping businesses identify risks early and implement appropriate compliance measures.

Possible examples of anticompetitive conduct through the use of AI

Algorithmic coordinated conduct

Automated pricing systems that rely on available market data can detect and respond to price deviations, thereby making explicit collusion between companies more stable, for example by supporting resale price maintenance or price-fixing agreements. An often-quoted example of this type of agreement is the “online sales of posters and frames” case (link), handled by the UK Competition & Markets Authority (CMA). In that case, two companies infringed competition law by agreeing not to undercut each other’s prices on Amazon’s UK marketplace. They had bespoke software developed to implement the agreement: it monitored their prices and adjusted them automatically so that neither business undercut the other. The CMA fined one of the companies, and its managing director was disqualified from serving as a director of any UK company for five years.
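
To illustrate the mechanism the CMA described, the sketch below (hypothetical, not taken from the case file) reduces such repricing software to its essentials: a rule that follows the rival’s observed price downwards but never undercuts it.

```python
# Purely illustrative sketch of the mechanism described by the CMA; all prices
# and names are invented, and this is not code from the actual case.

def reprice(own_price: float, rival_price: float) -> float:
    """Follow the rival's price downwards, but never go below it."""
    return min(own_price, rival_price)

# A simple monitoring loop: scraped rival prices feed straight into the seller's own price.
observed_rival_prices = [14.99, 13.49, 12.99]   # hypothetical scraped values
own_price = 15.99
for rival_price in observed_rival_prices:
    own_price = reprice(own_price, rival_price)
    print(own_price)   # 14.99, 13.49, 12.99 -- prices move in lockstep, never lower
```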

Where companies use the same third-party pricing software to inform or set their prices, this can create a hub-and-spoke structure that facilitates indirect information exchange and coordinated market behaviour. A hub-and-spoke structure refers to an arrangement in which a central undertaking (the hub) facilitates or enforces parallel conduct among multiple independent undertakings (the spokes) through vertical agreements, thereby creating indirect horizontal coordination.
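
As a hypothetical sketch of why shared pricing software raises this concern (the names and the pricing rule below are invented): when competing sellers all delegate pricing to the same engine, a single rule aligns their prices without any direct contact between them.

```python
# Hypothetical illustration of the hub-and-spoke concern; not based on any real service.

class SharedPricingService:
    """The 'hub': one recommendation engine used by many competing 'spokes'."""
    def recommend(self, observed_prices: list[float]) -> float:
        # One rule, applied identically for every subscriber, aligns their prices.
        return max(observed_prices) * 0.98

hub = SharedPricingService()
market_prices = [19.99, 21.50, 20.75]            # hypothetical observed market prices
sellers = ["seller_a", "seller_b", "seller_c"]   # independent competitors using the same hub
print({s: round(hub.recommend(market_prices), 2) for s in sellers})  # identical prices for all
```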

Two examples are the Samsung (link) and LG (link) cases handled by the ACM.

The ACM fined Samsung for attempts to influence resale prices of televisions between 2013 and 2018. Samsung monitored retailers’ online prices using spider software and other automated tools to track deviations from its desired price level. When prices dropped below Samsung’s target, Samsung contacted retailers to push them to raise prices. This systematic monitoring and intervention meant Samsung effectively controlled resale prices, violating Dutch and EU competition law.

Similarly, the ACM fined LG for unlawful resale price influencing arrangements with seven major retailers between 2015 and 2018. LG issued recommended prices but also asked retailers to disable the spider software that automatically adjusted their prices to competitors’ prices. This was intended to prevent downward price competition and to ensure that retailers adhered to LG’s fixed prices. LG monitored compliance using online tools and actively intervened when retailers deviated, which the ACM concluded was not mere advice but a binding arrangement restricting the retailers’ pricing freedom.

Samsung and LG both filed appeals with the District Court of Rotterdam, but the appeals were dismissed and the fines were upheld (EUR 39,875,500 for Samsung and EUR 7,943,500 for LG). However, note that further appeal to the Trade and Industry Appeals Tribunal (CBb) is still possible.

Self-learning autonomous algorithms may independently learn to coordinate on anti-competitive outcomes, without any direct information exchange or explicit agreement between companies. This happens because these algorithms are designed to optimise objectives such as profit or market share and continuously learn from market feedback. By processing vast amounts of data in real time, they can instantly adjust prices and strategies based on competitors’ behaviour.
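
This dynamic has mainly been studied through simulations in the economic literature, in which independent reinforcement-learning pricing agents have been found capable of sustaining prices above the competitive level. The sketch below is a simplified, hypothetical version of such a simulation (the demand model and all parameters are assumptions). The key point it illustrates is structural: each agent conditions only on observed prices and its own profit feedback, with no communication or agreement between the firms.

```python
# Illustrative sketch only, loosely modelled on academic simulations of algorithmic
# pricing (independent Q-learning agents in a repeated duopoly). The demand model and
# parameters are assumptions for illustration, not a description of any real system.
import random
from collections import defaultdict

PRICES = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]   # discrete price grid
COST = 1.0                                 # marginal cost
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.05     # learning rate, discount factor, exploration rate

def profit(p_own: float, p_rival: float) -> float:
    """Toy demand: the cheaper firm captures most of the market; ties are split."""
    if p_own < p_rival:
        share = 0.8
    elif p_own == p_rival:
        share = 0.5
    else:
        share = 0.2
    return (p_own - COST) * share

# One Q-table per firm; the 'state' each firm reacts to is simply last period's prices.
Q = [defaultdict(float), defaultdict(float)]

def choose(firm: int, state: tuple) -> float:
    if random.random() < EPSILON:
        return random.choice(PRICES)                        # occasional exploration
    return max(PRICES, key=lambda p: Q[firm][(state, p)])   # best-known response

state = (random.choice(PRICES), random.choice(PRICES))
for _ in range(100_000):
    p0, p1 = choose(0, state), choose(1, state)
    next_state = (p0, p1)
    for firm, (own, rival) in enumerate([(p0, p1), (p1, p0)]):
        # Each firm updates only from its own profit feedback -- no communication.
        best_next = max(Q[firm][(next_state, p)] for p in PRICES)
        Q[firm][(state, own)] += ALPHA * (
            profit(own, rival) + GAMMA * best_next - Q[firm][(state, own)]
        )
    state = next_state

print("Prices charged at the end of the simulation:", state)
```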

Algorithmic exclusionary conduct by dominant companies

Self-preferencing occurs when a company with a dominant position favours its own or affiliated products and services over those of competitors, meaning that rankings or recommendations are not based on competition on the merits. The main concern is that the company may leverage its dominance in one market to foreclose rivals in a related market, whether downstream or in a complementary market. This issue has been most frequently examined in the context of search, recommendation, and allocation algorithms.

An example is the “Google Shopping” case (link). The European Commission found that Google had abused its dominant position. Central to this finding was that Google granted its own comparison-shopping service an unjustified advantage in two ways. First, it demoted Google Shopping’s competitors: competing comparison-shopping services were shown only as general search results (i.e., simple blue links without rich features), and Google’s adjustment algorithms pushed their pages down the rankings. Second, Google gave its own shopping service preferential treatment, displaying it prominently with rich features at the top of the search results. Google did not apply the demotion algorithms to its own Google Shopping service.
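
As a simplified, hypothetical illustration of the mechanism (not Google’s actual algorithm; the scores and the demotion factor are invented), self-preferencing can be reduced to a ranking function that applies a demotion only to rival services.

```python
# Hypothetical illustration of self-preferencing in a ranking algorithm.

def rank(results: list[dict], own_service: str = "own-shopping", demotion: float = 0.5) -> list[dict]:
    """Sort results by relevance, but apply a demotion factor to rival comparison services only."""
    def score(r: dict) -> float:
        s = r["relevance"]
        if r["type"] == "comparison" and r["name"] != own_service:
            s *= demotion            # demotion applied to rivals, never to the own service
        return s
    return sorted(results, key=score, reverse=True)

results = [
    {"name": "rival-comparison", "type": "comparison", "relevance": 0.9},
    {"name": "own-shopping",     "type": "comparison", "relevance": 0.7},
    {"name": "merchant-page",    "type": "general",    "relevance": 0.6},
]
print([r["name"] for r in rank(results)])   # the own service outranks a more relevant rival
```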

Pricing algorithms enable personalised pricing and algorithmic targeting. A company can engage in price discrimination if it has a degree of market power, a mechanism to target customer prices, and an estimate of each customer’s willingness to pay. The combination of sophisticated pricing algorithms and detailed consumer profiles has made first-degree price discrimination increasingly feasible. Price discrimination is also possible without market power, but in that case it is unlikely to be effective, may even be pro-competitive and, in any event, is not prohibited.
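
Conceptually, first-degree price discrimination amounts to pricing each customer close to an estimated willingness to pay. The sketch below is a hypothetical illustration; the willingness-to-pay figures stand in for the profile-based estimates referred to above.

```python
# Hypothetical illustration of personalised pricing based on estimated willingness to pay (WTP).

def personalised_price(estimated_wtp: float, cost: float, safety_margin: float = 0.05) -> float:
    """Charge just under the customer's estimated willingness to pay, but never below cost."""
    return max(cost, estimated_wtp * (1 - safety_margin))

# Invented per-customer WTP estimates, as might be derived from consumer profiles.
customers = {"frequent_buyer": 24.0, "bargain_hunter": 11.0, "new_visitor": 16.0}
print({c: round(personalised_price(wtp, cost=10.0), 2) for c, wtp in customers.items()})
```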

Predatory pricing means pricing below costs by a dominant undertaking with the aim of driving rivals off the market. Algorithms make this strategy more effective by identifying and targeting marginal customers, those most likely to switch, while avoiding losses on inframarginal customers who are unlikely to switch. This targeted approach reduces the overall cost of predation and makes the strategy more feasible, ultimately facilitating anticompetitive foreclosure.
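
The targeting logic can be illustrated with a hypothetical sketch: a below-cost price is offered only where a predicted switching probability exceeds a threshold, while all other customers continue to pay the regular price.

```python
# Hypothetical illustration of algorithmically targeted below-cost pricing;
# the switching-probability scores and prices are invented.

def targeted_price(switch_probability: float, regular_price: float,
                   below_cost_price: float, threshold: float = 0.7) -> float:
    """Offer the below-cost price only to customers predicted likely to switch."""
    return below_cost_price if switch_probability >= threshold else regular_price

customers = {"loyal": 0.10, "wavering": 0.75, "rival_customer": 0.90}
print({c: targeted_price(p, regular_price=20.0, below_cost_price=8.0)
       for c, p in customers.items()})   # losses are concentrated on the contestable customers
```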

Dominant companies can use algorithmic targeting to improve rebate strategies. Standardised rebates often do not maximise profits across different customer groups, while personalised rebates can be expensive and complicated to administer. Algorithmic targeting makes it easier to design rebates that keep customers from switching, which can foreclose competitors and increase market power.

Tying and bundling let a dominant company use its power in one market to gain an advantage in another. Tying means customers must buy two products together, while bundling offers them as a package. Algorithmic targeting makes these strategies stronger by identifying customers with little price sensitivity and offering them only the bundle, which they prefer over buying separate products from competitors. This helps the company block rivals and increase its market power.

Algorithmic exploitative conduct

Algorithmic excessive pricing can take two forms: monetary and non-monetary. In monetary terms, pricing algorithms set exploitative prices by analysing demand and adjusting dynamically to maximise profits. Non-monetary excessive pricing works differently: instead of charging more money, a company reduces quality through its algorithms. For example, search or recommendation systems might prioritise ads over relevant results, or allocation algorithms might impose stricter data collection requirements.

In both cases, algorithms make these strategies more effective by identifying which customers will tolerate higher prices or lower quality and applying changes automatically. This means customers “pay” not only with money but also through worse service or unfavourable conditions.

A related example is the “consent or pay” case (link). In 2023 Meta introduced a binary “consent or pay” advertising model in an attempt to comply with the newly introduced Digital Markets Act (DMA). Under this model, EU users of Facebook and Instagram had to choose either to consent to the combination of their personal data for personalised advertising or to pay a monthly subscription for an ad-free service. The European Commission found that this model did not comply with the Digital Markets Act because it failed to provide users with the required, specific option to use a service that relied on less of their personal data while remaining otherwise equivalent to the personalised-ads service. Meta’s model also prevented users from freely exercising their right to consent to the combination of their personal data.  

Considerations when using AI

Where infringements involving AI fall within the scope of the prohibition of anti-competitive agreements under Article 101 TFEU or the prohibition of abuse of a dominant position under Article 102 TFEU, the existing EU competition law framework applies. Accordingly, no separate AI-specific legislation is required to address such conduct, as the current provisions already encompass practices in which AI is employed to restrict competition or exploit market dominance.

The Digital Services Act (DSA) focuses on transparency and accountability for online platforms, requiring very large platforms to disclose how algorithms rank, recommend, and moderate content, and to assess risks linked to AI-driven systems. By curbing opaque practices and manipulative designs, it indirectly supports fair competition and complements the DMA. The DMA targets gatekeeper platforms (designated as such by the European Commission) by imposing obligations that prevent self-preferencing, ensure data portability, and maintain fair access for business users.

For further background, please refer to our earlier news update on the DSA (link).

The ACM has published guidelines on consumer protection and pricing. These include rules against misleading consumers through algorithms, explained in its Guidelines for the Protection of the Online Consumer (link). Companies should assess whether their online environment helps consumers make informed choices, not just at checkout, and whether consumers would make the same choice in the absence of influencing techniques.

The ACM also issued a guideline on how prices and comparisons should be shown (link).

Recently, the ACM started a market study on algorithm-driven pricing (link). The goal is to understand how dynamic and personalised pricing, based on data and algorithms, works in practice and what its effects are. For this study, the ACM is focusing on airline ticket prices.

In addition to the existing rules under competition law, the DSA/DMA and ACM guidance, the AI Act – which entered into force on 1 August 2024 – will have significant implications for the way companies use AI. The AI Act applies in stages:

  • General provisions and the bans on prohibited AI practices apply as of 2 February 2025;
  • Rules on general-purpose AI models, including governance provisions, apply as of 2 August 2025;
  • Most remaining obligations, including those for high-risk AI systems listed in Annex III, apply as of 2 August 2026; obligations for high-risk AI systems embedded in products covered by EU harmonisation legislation apply as of 2 August 2027.

For further background, please refer to our earlier news update on the AI Act (link).