Artificial intelligence (AI) and algorithmic systems are transforming digital markets, raising novel competition law challenges. In the UK, competition (antitrust) law – primarily the Competition Act 1998 (CA 1998) – applies to AI-driven markets just as it does to traditional ones. However, AI algorithms can create new risks of anti-competitive behaviour. In particular, pricing algorithms may collude (whether explicitly through agreements or tacitly through parallel conduct) to fix prices at higher levels, and dominant tech firms may use advanced AI to abuse their market power (for example, by favouring their own services or excluding rivals). This essay analyses how UK competition law addresses these issues. It explores the risks of algorithmic collusion (both explicit and tacit) and the potential abuses of dominance via AI. It also examines the response of the UK’s Competition and Markets Authority (CMA) to these challenges, and compares the UK approach with that of the EU where relevant.
Algorithmic collusion: explicit and tacit
Collusion in competition law refers to coordination between firms to avoid competing, often resulting in higher prices. UK law prohibits explicit collusion (such as price-fixing agreements) under Chapter I of the Competition Act 1998. Specifically, section 2(1) CA 1998 (the Chapter I prohibition) outlaws agreements between undertakings, decisions by associations of undertakings and concerted practices that have as their object or effect the prevention, restriction or distortion of competition within the UK. This mirrors Article 101 of the Treaty on the Functioning of the EU (TFEU), which similarly prohibits anti-competitive agreements. Explicit collusion involves a meeting of minds – an actual agreement or at least some form of communication and mutual understanding between competitors to coordinate their behaviour. Price-fixing, market-sharing and bid-rigging cartels are classic examples of explicit collusion, treated as restrictions of competition “by object” (the UK/EU analogue of per se illegality in US antitrust).
AI-driven markets introduce new means for companies to collude. Firms increasingly use pricing algorithms to set and adjust prices in real time, based on vast data inputs. These algorithms can facilitate collusion in several ways. First, algorithms make it easier to monitor competitors’ prices and detect any deviation from a coordinated outcome. This enhanced market transparency and rapid detection can stabilise a cartel – algorithms can instantly retaliate against a price cut by undercutting the deviator, thereby removing incentives to cheat on a collusive arrangement. As EU Commissioner Margrethe Vestager noted, algorithms may lead to “more effective and stable cartels” by quickly spotting cheating and punishing it. Second, multiple firms might rely on the same pricing software or common algorithmic platform, effectively using a shared “hub” to coordinate prices (a hub-and-spoke collusion scenario). If competitors delegate pricing to a common algorithm or service, it can amount to a concerted practice because they knowingly substitute independent decision-making with algorithmic coordination. A notable example in the EU is the Eturas case, where travel agencies used a common online booking system that imposed a uniform cap on discounts – the Court of Justice found this could constitute a concerted practice under Article 101 TFEU if the travel agencies were aware of the restriction (Eturas UAB v. Lithuanian Competition Council, Case C-74/14, EU:C:2016:42). Finally, and most controversially, there is the possibility of “autonomous tacit collusion”: sophisticated pricing algorithms might learn to collude without any explicit agreement or direct communication between firms. In other words, purely through independent machine-learning processes, algorithms could reach a stable high-price equilibrium that each firm’s AI recognises as optimal, thus achieving what is effectively tacit price coordination.
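To make the monitoring-and-retaliation mechanism concrete, the following toy sketch (in Python; all prices and the logic itself are hypothetical, a caricature of the mechanism rather than any real pricing product) shows how a simple automated repricing rule can stabilise a collusive price: because deviation is detected and punished instantly, undercutting earns at most one period of extra sales.

```python
# Illustrative only: a deliberately simplified "trigger strategy" repricer.
# All prices are hypothetical; this is a caricature of the mechanism,
# not any real pricing product.

COLLUSIVE_PRICE = 10.00  # supracompetitive "focal" price both algorithms gravitate to
TOLERANCE = 0.01         # how much deviation is tolerated before retaliating

def next_price(rival_price: float) -> float:
    """One step of a tit-for-tat repricing rule.

    Hold the focal price while the rival holds it; the moment the rival
    undercuts, retaliate by undercutting back. Because detection and
    punishment are instantaneous, cheating earns at most one period of
    extra sales - which is what makes the high-price outcome stable.
    """
    if rival_price < COLLUSIVE_PRICE - TOLERANCE:
        return max(rival_price - 0.10, 0.0)  # immediate punishment: undercut the deviator
    return COLLUSIVE_PRICE                   # rival is cooperating: hold the focal price

print(next_price(10.00))  # 10.0 (cooperate at the focal price)
print(next_price(9.50))   # 9.4  (retaliate against the deviation)
```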
Under current law, explicit collusion via algorithms is clearly unlawful, as the firms remain responsible for their algorithms’ actions. The UK’s CMA demonstrated this in its online posters cartel case in 2016. In that case, two online sellers of posters (Trod Ltd and GB Eye Ltd) agreed not to undercut each other’s prices on Amazon Marketplace. They implemented this agreement by configuring automated repricing software to maintain the price alignment. The CMA found this to be a violation of the Chapter I prohibition (analogous to a price-fixing cartel) and imposed fines and director disqualification. The fact that pricing was executed by an algorithm did not shield the firms from liability – as the CMA stated, using software to fix prices is just as illegal as any other means of collusion. Notably, one conspirator (GB Eye) received leniency for reporting the cartel, and the other (Trod) also pleaded guilty to a related charge in the US. This case, the UK’s first algorithm-driven cartel decision, signalled that competition authorities can tackle explicit algorithmic collusion with existing laws. Similarly, EU authorities have emphasised that companies cannot evade responsibility by “hiding behind a computer program” – if an agreement is formed (even via automated systems), the firms will be held accountable (Vestager, 2017). Indeed, the EU E-Commerce Sector Inquiry (2017) found that a majority of surveyed online retailers track competitors’ prices, two-thirds of them using automated monitoring software, and that many adjust their own prices automatically – prompting regulators to warn that antitrust laws apply fully to pricing algorithms.
The harder question is how to deal with tacit algorithmic collusion – situations where algorithms reach a collusive outcome without an identifiable agreement or human communication. Tacit collusion (also called conscious parallelism) refers to firms aligning their conduct (such as setting similarly high prices) independently but in awareness of each other, without any overt collusion. Competition law traditionally does not prohibit purely tacit collusion, because unilateral parallel behaviour is not an “agreement or concerted practice” in the legal sense. As the EU courts put it, each firm is free to “adapt itself intelligently” to rivals’ behaviour (Case C-8/08 T-Mobile Netherlands [2009] ECR I-4529), and mere parallel price changes are lawful so long as there is no anti-competitive contact. This principle holds in UK law as well. Consequently, if algorithms independently learn to coordinate (for example, AI pricing agents repeatedly interact and eventually settle at a supracompetitive price equilibrium), there may be no legal “hook” on which to hang liability under current antitrust doctrines. Such an outcome is troubling, because consumers could suffer higher prices akin to a cartel, yet no law is clearly broken. Scholars have raised concerns that advanced AI may make tacit collusion more pervasive – essentially “turbo-charging” oligopolistic coordination (Ezrachi and Stucke, 2016). The CMA’s 2021 report on algorithms likewise noted that algorithmic collusion is an “increasingly significant risk” as pricing algorithms grow more complex and widespread.
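The following toy simulation, written in the spirit of academic experiments on algorithmic pricing rather than any real market, illustrates the mechanism: two independent Q-learning agents repeatedly price against each other with no communication whatsoever. All parameters are illustrative assumptions; whether, and how far, the agents settle above the competitive (lowest) price depends on those choices – which is precisely why such outcomes are hard to predict and to police.

```python
# Toy simulation of two independent pricing agents; no communication occurs.
# All parameters are invented for illustration.
import random

PRICES = [1, 2, 3, 4, 5]   # discrete price grid: 1 ~ competitive, 5 ~ monopoly
EPISODES = 200_000
ALPHA, GAMMA = 0.1, 0.9    # learning rate and discount factor

def profit(own: int, rival: int) -> float:
    """Stylised Bertrand demand: the cheaper firm serves the whole market,
    ties split it, and the more expensive firm sells nothing."""
    if own < rival:
        return float(own)
    if own == rival:
        return own * 0.5
    return 0.0

# Each agent's "state" is simply the rival's last observed price.
q = [{s: {a: 0.0 for a in PRICES} for s in PRICES} for _ in range(2)]
last = [random.choice(PRICES), random.choice(PRICES)]

for t in range(EPISODES):
    eps = max(0.01, 1.0 - t / EPISODES)  # exploration decays over time
    acts = []
    for i in range(2):
        state = last[1 - i]
        if random.random() < eps:
            acts.append(random.choice(PRICES))                  # explore
        else:
            acts.append(max(q[i][state], key=q[i][state].get))  # exploit
    for i in range(2):
        state, next_state = last[1 - i], acts[1 - i]
        reward = profit(acts[i], acts[1 - i])
        best_next = max(q[i][next_state].values())
        q[i][state][acts[i]] += ALPHA * (reward + GAMMA * best_next
                                         - q[i][state][acts[i]])
    last = acts

print("prices after learning:", last)  # may settle above 1, with no agreement
```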
Importantly, competition authorities are debating how to respond if autonomous collusion emerges. One approach is to use existing tools creatively: authorities can scrutinise whether ostensibly independent algorithmic alignment really involved some facilitating practice or indirect communication. For instance, if companies knowingly deploy algorithms prone to collusion, could that be viewed as an unlawful concerted practice, or at least as negligent facilitation? At present, the consensus is that without evidence of communication or a “meeting of minds”, enforcement is difficult. The European Commission has mulled expanding the interpretation of what counts as “communication” under Article 101 – perhaps arguing that if two AI systems effectively signal and respond to each other over time, that could constitute the requisite meeting of minds. However, the prevailing case law (e.g. Suiker Unie (Case 40/73) and Woodpulp II (Ahlström v Commission, 1993)) draws a line between concertation and mere parallel conduct, and “intelligent adaptation” remains outside Article 101’s reach. Another mooted solution is legislative change – some commentators suggest that if tacit algorithmic collusion becomes a real problem, competition law might need reform (for example, per se rules against certain computer-mediated parallel pricing, or obligations on companies to design algorithms that comply with antitrust norms). As of 2025, neither the UK nor the EU has gone so far. Authorities instead focus on vigilance and deterrence: making clear that any algorithm-facilitated coordination that can be caught by current law will be pursued. The CMA has stated that if algorithms are “designed or deployed specifically to limit competition… or to exploit a dominant position, competition law should in principle apply to those who design or deploy those algorithms”. Likewise, the European Commission warns that companies using pricing software in anti-competitive ways will face enforcement, and that widespread use of algorithms is “no excuse” for cartel behaviour. In summary, explicit algorithm-driven collusion falls squarely under UK competition law, while tacit collusion by AI poses a grey-area challenge. The law does not currently ban purely tacit coordination, but regulators are researching and monitoring this risk closely, and may seek novel theories or policy tools if autonomous collusion starts causing serious consumer harm. For now, firms are on notice that adopting AI pricing tools does not immunise them: any collusive outcome that involves agreement, facilitation or knowing coordination will be treated as a serious infringement (with penalties including fines of up to 10% of turnover, and even criminal charges for individuals in hardcore cartels under the Enterprise Act 2002).
Abuse of dominance by AI-powered firms
Alongside collusion concerns, competition law in AI-driven markets must contend with potential abuse of dominance by powerful tech firms. Under Chapter II of the Competition Act 1998 (and Article 102 TFEU in EU law), it is illegal for a firm with a dominant market position to abuse that dominance, for example by excluding competitors or exploiting consumers. Dominant tech companies often control important digital ecosystems and deploy advanced AI algorithms in their operations. This creates opportunities to leverage algorithmic tools to entrench market power or to engage in anti-competitive conduct.
One major risk is algorithmic self-preferencing by dominant digital platforms. Self-preferencing refers to a dominant platform favouring its own products or services over those of third parties on its platform (or in its search results, app store, etc.). AI systems and algorithms can enable subtle but effective self-preferencing. For instance, a dominant online marketplace or search engine can tweak its ranking algorithms to systematically surface its own offerings more prominently than rivals’, or apply quality metrics that disadvantage competitors while exempting its own services. In traditional terms, this can be an abuse of dominance: it is akin to a vertically integrated monopolist giving itself an unfair advantage. The European Commission’s Google Search (Shopping) case (2017) is illustrative. Google held a dominant position in general online search. The Commission found that Google abused that dominance by algorithmically promoting its own comparison shopping service at the top of search results, while demoting rival comparison shopping sites. Google’s search algorithms (notably the “Panda” update) down-ranked sites deemed to have poor content – ostensibly a neutral rule – but Google exempted its own Google Shopping service from these demotions, ensuring that Google Shopping always appeared prominently. This self-preferencing was not based on merit, and it substantially foreclosed competition in comparison shopping. The Commission fined Google €2.42 billion for this abuse (Case AT.39740, June 2017). This landmark EU case, upheld by the General Court in 2021 and by the Court of Justice in 2024, underscores that manipulating AI-driven services to favour oneself can breach competition law. Although this was an EU enforcement action, the UK (which at the time was in the EU) applies the same principles under Chapter II CA 1998. Indeed, the CMA has explicitly highlighted Google’s conduct in this case as a paradigmatic example of algorithmic self-preferencing.
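The structure of the conduct found in Google Shopping can be caricatured in a few lines of code. The sketch below is purely schematic – it is not Google’s actual algorithm, and all names and numbers are invented – but it shows how a nominally neutral quality penalty, combined with an exemption for the platform’s own service, guarantees that the platform ranks first even on inferior merit.

```python
# Schematic caricature of algorithmic self-preferencing; all names and
# numbers are hypothetical, not drawn from any real ranking system.
from dataclasses import dataclass

@dataclass
class Result:
    name: str
    relevance: float        # merit-based score
    quality_penalty: float  # nominally neutral demotion (e.g. "thin content")
    is_own_service: bool

def rank_score(r: Result) -> float:
    # The penalty applies to everyone - except the platform's own service.
    if r.is_own_service:
        return r.relevance
    return r.relevance - r.quality_penalty

results = [
    Result("RivalComparisonSite", relevance=0.9, quality_penalty=0.3, is_own_service=False),
    Result("OwnShoppingService",  relevance=0.8, quality_penalty=0.3, is_own_service=True),
]

for r in sorted(results, key=rank_score, reverse=True):
    print(r.name, round(rank_score(r), 2))
# OwnShoppingService (0.8) outranks RivalComparisonSite (0.6) despite lower merit.
```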
Dominant e-commerce platforms like Amazon have faced similar scrutiny. Amazon operates a marketplace while also selling its own products (including private labels) on that marketplace – a structural conflict of interest. It can use algorithms to its advantage in two main ways: by exploiting data and by controlling rankings and access. Amazon’s vast troves of data on third-party sellers could be misused by its AI systems to inform Amazon’s own retail decisions – for example, identifying profitable products and entering those markets, or adjusting Amazon’s prices based on sellers’ data. Such misuse of competitor data by a dominant platform can be anti-competitive, as it undermines the rivals who depend on the platform. Additionally, Amazon’s algorithms decide which seller wins the prominent “Buy Box” for a product (the default purchase option), and which sellers qualify for the Prime badge (which influences consumer choice). There have been concerns that Amazon’s algorithms prefer its own retail offers, or sellers who use Amazon’s logistics, thereby biasing the platform in Amazon’s favour. The CMA investigated Amazon under Chapter II CA 1998 on exactly these issues. In 2022–2023, the CMA examined whether Amazon’s conduct in its UK Marketplace was anti-competitive – specifically, whether Amazon was unfairly using third-party seller data and giving undue preference to its own products, or to certain sellers, in Buy Box and Prime eligibility decisions. To resolve the CMA’s concerns, Amazon agreed to a set of binding commitments in late 2023. These commitments require Amazon to stop using non-public data from Marketplace sellers to advantage its retail business, to ensure equal treatment of all sellers’ offers in Buy Box selection (preventing any inherent bias favouring Amazon or its affiliates), and to allow Prime sellers to use independent logistics providers (decoupling Prime eligibility from Amazon’s own delivery network). By accepting these commitments, the CMA addressed the potential exclusionary abuses in Amazon’s algorithms – effectively requiring Amazon to make its algorithmic criteria fair and non-discriminatory. This UK action paralleled the European Commission’s investigation into Amazon, which led to similar commitments at the EU level in 2022 under Article 102 TFEU (the EU’s Digital Markets Act, discussed below, now imposes comparable obligations on designated gatekeepers directly). These examples show that competition authorities are closely watching how dominant firms use AI and data: leveraging an algorithm to favour one’s own services or to undermine rivals can amount to an abuse of dominance.
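A similar schematic sketch captures the Buy Box concern that the CMA’s commitments address. Again, this is a hypothetical illustration, not Amazon’s actual selection logic: if the offer-scoring function quietly rewards use of the platform’s own logistics, the “best offer” skews toward the platform even when an independent seller is cheaper on otherwise equal terms.

```python
# Hypothetical illustration of biased vs neutral offer selection;
# not Amazon's actual algorithm - all figures are invented.
from dataclasses import dataclass

@dataclass
class Offer:
    seller: str
    price: float
    delivery_days: int
    uses_platform_logistics: bool

def buybox_score_biased(o: Offer) -> float:
    """Biased scoring: a hidden bonus for using the platform's own logistics."""
    score = -o.price - 0.5 * o.delivery_days
    if o.uses_platform_logistics:
        score += 2.0  # the hidden thumb on the scale
    return score

def buybox_score_neutral(o: Offer) -> float:
    """Neutral scoring in the spirit of the commitments: price and service only."""
    return -o.price - 0.5 * o.delivery_days

offers = [
    Offer("IndependentSeller", price=19.99, delivery_days=2, uses_platform_logistics=False),
    Offer("PlatformRetail",    price=21.49, delivery_days=2, uses_platform_logistics=True),
]

print(max(offers, key=buybox_score_biased).seller)   # PlatformRetail wins the Buy Box
print(max(offers, key=buybox_score_neutral).seller)  # IndependentSeller wins on the merits
```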
Another category of AI-related dominance abuse is algorithmic exclusion or exploitation through personalised practices. Dominant digital firms may use AI to micro-segment users and engage in personalised pricing or personalised service terms. If a firm has market power, extreme price discrimination enabled by AI could be deemed an exploitative abuse – for example, charging higher prices to certain customers (perhaps identified as less price-sensitive by an AI) where the difference bears no relation to cost. While competition law in the UK/EU seldom intervenes against high pricing unless it is “excessive and unfair” (a high bar), the use of AI might bring new scrutiny to pricing practices, especially if vulnerable consumers are targeted (this also crosses into consumer protection law). The CMA’s 2021 report flagged that personalisation can harm consumers by exploiting their biases or information asymmetries, and that such practices, when carried out by powerful firms, might reduce overall welfare. So far, however, abusive pricing algorithms have not been the focus of enforcement in the UK. A more concrete concern is predatory pricing algorithms. A dominant firm could deploy AI to set predatory low prices algorithmically and drive out competitors (for example, using dynamic pricing that automatically matches any competitor’s lower price, even below cost). Predation is an established abuse in principle (as defined in cases like AKZO Chemie (1991) in the EU), but proving an AI-driven predatory strategy can be complex. Still, the legal standard remains: if a dominant firm’s algorithm prices below an appropriate measure of cost with the intent to eliminate a rival, it violates Chapter II CA 1998. The twist is that AI might pursue such a strategy without explicit human direction – competition authorities would then have to attribute the AI’s conduct to the firm, which they are likely to do, given that firms are responsible for their tools. As the CMA has noted, firms cannot escape liability for their algorithms’ actions; a dominant firm whose algorithm systematically drives out competitors could face enforcement.
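The predation risk can likewise be illustrated schematically. In the hypothetical sketch below, an unconstrained “always undercut the cheapest rival” rule drifts below cost automatically, with no human deciding to predate; adding a cost floor (a crude stand-in for an AKZO-style average-variable-cost benchmark) is one example of the compliance-by-design guardrail discussed later.

```python
# Hypothetical sketch of automated predation risk; all figures invented.
AVERAGE_VARIABLE_COST = 5.00  # crude stand-in for an AKZO-style AVC benchmark

def match_lowest(rival_prices: list[float]) -> float:
    """Naive dynamic pricing: always undercut the cheapest rival, whatever the cost."""
    return min(rival_prices) - 0.01

def match_lowest_with_floor(rival_prices: list[float]) -> float:
    """The same rule with a compliance guardrail: never price below AVC."""
    return max(min(rival_prices) - 0.01, AVERAGE_VARIABLE_COST)

rivals = [4.50, 6.00]
print(match_lowest(rivals))             # 4.49 - below cost: predation exposure
print(match_lowest_with_floor(rivals))  # 5.0  - the floor reduces that exposure
```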
Data and interoperability are also emerging as competition issues in AI markets. Dominant AI companies often control key inputs like data or computing power. A dominant firm might refuse to give competitors access to essential data or interfaces, which could be an abuse (akin to refusal to supply or denial of access to an essential facility). For example, if an AI platform like a dominant operating system or cloud service denies interoperability or access in order to favor its own complementary AI products, that could be exclusionary. Competition authorities are increasingly attentive to such scenarios: the CMA and its global counterparts have pointed out the risk of incumbents using control of data or infrastructure to foreclose nascent AI competitors. One live example is the investigation (by the European Commission, with CMA interest) into arrangements between Microsoft and OpenAI. Microsoft, already powerful in cloud services, invested in OpenAI (creator of ChatGPT) and made Azure the exclusive cloud provider for OpenAI. Regulators are probing whether this partnership could entrench Microsoft’s dominance (tying cloud and AI services, possibly locking out other cloud competitors or giving Microsoft preferential access to OpenAI’s technology). While this is still under scrutiny, it exemplifies how extending dominance into AI markets via exclusive agreements could raise competition concerns (potentially analysed as abuse of dominance or under merger control rules).
In summary, UK competition law is equipped (in principle) to tackle AI-related abuses by dominant firms. The key is applying traditional abuse categories to the new context: self-preferencing, exclusive dealing or tying, refusal to deal, predatory pricing and exploitative practices can all occur through the medium of algorithms. The CMA has already taken action where the evidence allows – such as the Amazon case addressing preferential algorithms – and it continues to investigate digital giants like Google, Apple, Meta and Amazon on various fronts (e.g. ongoing work on mobile ecosystems and cloud services). The EU has been similarly active on dominance in digital markets, with high-profile Article 102 cases (Google Search, Google Android, Apple App Store, Amazon Marketplace), and it has additionally introduced a new regulatory regime (the Digital Markets Act 2022) imposing upfront obligations on major “gatekeeper” platforms.
The CMA’s response and the EU’s approach
The Competition and Markets Authority is proactively responding to the challenges of AI-driven markets through enforcement, market studies, and advocacy for new regulatory tools. On enforcement, as discussed, the CMA has pursued cases involving algorithms – notably the 2016 poster sellers cartel (enforcing the Chapter I prohibition) and the Amazon investigation (a Chapter II abuse case, resolved with commitments in 2023). These cases show that the CMA is willing to apply existing laws to algorithm-facilitated misconduct and to secure remedies (fines, commitments) that change how algorithms operate (e.g. Amazon’s algorithms must now treat sellers fairly as a result of the CMA’s intervention).
Beyond case enforcement, the CMA has been studying digital markets to understand how algorithms affect competition. It conducted a market study into Online Platforms and Digital Advertising (2019–2020), which examined the dominance of platforms like Google and Facebook and the role of algorithms in search, advertising and news feed rankings. That study’s findings (published in 2020) reinforced concerns about self-preferencing and the leveraging of data, and it recommended a new regulatory regime for digital markets. Following this, the UK government (with CMA input) proposed and has now enacted the Digital Markets, Competition and Consumers Act 2024 (DMCC Act). The Act puts the Digital Markets Unit (DMU) within the CMA on a statutory footing and empowers the CMA to designate large tech firms as having “Strategic Market Status” (SMS) in certain digital activities. Those SMS firms will be subject to conduct requirements aimed at preventing anti-competitive practices – for example, the DMU can enforce rules against self-preferencing, data misuse, or restrictive access terms imposed by an SMS firm (much as the EU’s Digital Markets Act bans such conduct by gatekeepers). The DMCC Act thus represents a significant regulatory response to AI-era market power. It allows the CMA to impose pro-competitive interventions (including interoperability mandates or data access orders) and hefty penalties for non-compliance. Essentially, the UK is augmenting its competition law toolkit: traditional ex post enforcement (after a violation) is being complemented by ex ante rules for the most powerful digital players, to curb unfair algorithms before they inflict lasting damage on competition. This aligns with the EU’s approach, where the Digital Markets Act (DMA) became applicable in 2023 to regulate big tech platforms’ conduct (the DMA explicitly prohibits certain behaviours, such as self-preferencing and tying by designated gatekeepers, reflecting lessons from cases like Google Shopping).
The CMA is also engaging in research and guidance specific to algorithms. Its report Algorithms: How they can reduce competition and harm consumers (CMA, 2021) was a comprehensive analysis of algorithmic harms. It outlined theories of harm covering collusion, exclusion and consumer exploitation via algorithms, highlighting scenarios such as algorithmic price-fixing, ranking algorithms that exclude competitors, and personalisation strategies that could be unfair. By publishing this paper along with a Call for Information, the CMA signalled to industry that it is watching algorithmic conduct closely and is developing the capability to identify problematic systems. The report also discussed tools for regulators to audit and investigate algorithms, even when the algorithms are complex or opaque. This reflects a broader regulatory trend: competition authorities are upgrading their technical expertise and enforcement tools (for example, developing algorithm-screening systems to detect cartels, or requiring algorithmic transparency in investigations). The CMA, for instance, has considered the feasibility of an “algorithm audit” power – the ability to scrutinise a company’s code or AI outputs in an inquiry.
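By way of illustration, one very simple screening idea from the economics literature (not an actual CMA tool) looks for the statistical fingerprints of coordination: unusually low price variance and near-perfect co-movement between competitors. A minimal sketch, using invented data:

```python
# Minimal sketch of a price screen from the economics literature;
# not a CMA tool, and the data below are invented.
import statistics

def coefficient_of_variation(prices: list[float]) -> float:
    """Dispersion relative to the mean: sustained collusion tends to compress this."""
    return statistics.stdev(prices) / statistics.mean(prices)

# Invented weekly prices for two firms in one market segment.
firm_a = [10.00, 10.10, 10.00, 10.10, 10.00, 10.10]
firm_b = [10.00, 10.10, 10.00, 10.10, 10.00, 10.10]

cv = coefficient_of_variation(firm_a)
rho = statistics.correlation(firm_a, firm_b)  # Pearson's r; requires Python 3.10+

# Low dispersion plus near-perfect co-movement is not proof of collusion -
# but it is the kind of pattern a screen would flag for human investigation.
if cv < 0.02 and rho > 0.95:
    print(f"flag for review: cv={cv:.4f}, correlation={rho:.2f}")
```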
Furthermore, the CMA has been cooperating internationally on AI and competition issues. It has participated in OECD discussions and joint statements with other authorities. In 2017, the UK submitted a note to the OECD on Algorithms and Collusion, sharing insights from the UK’s early case experience and acknowledging the challenges algorithms pose. More recently, in 2024, the CMA’s chief executive joined a joint statement on competition in generative AI alongside the EU and US antitrust heads. This joint statement identifies emerging risks, such as a few firms controlling key AI inputs (like cloud computing power or foundation models) and incumbent tech giants extending their dominance into AI markets. The agencies committed to remain vigilant and to coordinate where possible in addressing anti-competitive conduct in AI development. Such coordination is vital because digital markets are borderless – if algorithms collude or a dominant AI platform forecloses competition, those effects often cross jurisdictions. The CMA’s international engagement ensures that the UK approach is informed by global experience and that companies face consistent pressure across major markets.
Compared to the EU approach, the UK’s substantive competition law on collusion and dominance remains closely aligned (given the shared heritage of the CA 1998 with EU competition principles). Where divergence appears is in the regulatory frameworks adopted post-Brexit: the UK’s DMU regime under the DMCC Act versus the EU’s DMA. Both aim to tackle issues like self-preferencing and unfair access terms imposed by tech gatekeepers, but their implementation details differ (the UK regime allows more tailor-made conduct requirements per firm, while the EU DMA imposes a uniform set of obligations). In enforcement, the European Commission has brought multiple landmark cases involving algorithms and dominance – beyond Google Shopping, cases like Google Android (2018), concerning the tying of Google’s search and browser apps on the mobile operating system, and the investigation into Amazon’s data use. The CMA, during the EU period, contributed to some of those cases and has since launched its own probes (such as Amazon, Apple’s App Store policies, and Google’s Privacy Sandbox proposal in online advertising, which was resolved with commitments in 2022). The EU also remains cautious yet concerned about algorithmic tacit collusion – its 2017 OECD note and Vestager’s speeches reiterate that while tacit collusion is not illegal, the Commission is studying whether algorithms change the game. The European Commission has not (as of 2025) attempted to prosecute a pure tacit algorithmic collusion case, and nor has the CMA. Both seem to prefer advocating “compliance by design” – encouraging firms to build algorithms that will not collude autonomously, and warning that if firms knowingly facilitate collusion through AI, they will be liable.
In essence, the UK and EU are taking a broadly similar stance: apply existing antitrust laws to algorithm-driven conduct, step up monitoring of markets with algorithms, and introduce ex ante measures for digital giants to prevent the most egregious forms of abuse that have been identified (like self-preferencing and data hoarding). The CMA’s advantage post-Brexit is agility – it can set up its own regime (the DMU) and take swift action domestically, while still cooperating internationally. The EU’s advantage is its scale – DMA and antitrust enforcement cover the whole single market, which for global tech firms is crucial. For companies operating in AI-driven markets, these developments mean they face increased scrutiny on both fronts. Practices that may have flown under the radar a decade ago – e.g. subtle algorithmic biases or silent price coordination – are now squarely on regulators’ agendas.
Conclusion
AI-driven markets offer enormous benefits in innovation and efficiency, but they also pose significant challenges for competition law. The UK’s competition regime is actively adapting to ensure that AI and algorithms do not become vectors for anti-competitive conduct. Algorithmic collusion – whether explicit agreements implemented through software or more nebulous tacit coordination by AI agents – is a real concern. UK law, through the Competition Act 1998, clearly prohibits explicit price-fixing and market-sharing, and this extends to collusion carried out via algorithms. The CMA’s enforcement in the online poster cartel case shows that traditional tools can reach new digital misconduct. Tacit algorithmic collusion, on the other hand, remains a grey area: it highlights a gap between economic harm and legal culpability that authorities are not fully equipped to bridge without further evidence or legal evolution. Both the CMA and European Commission are studying this area closely, weighing options such as enhanced detection, creative enforcement (e.g. treating algorithm designers as participants if they enable collusion), or even future regulatory intervention if necessary.
Regarding abuse of dominance, AI has become a double-edged sword. Dominant tech firms can leverage AI algorithms to strengthen their positions – through self-preferencing, squeezing competitors, tying ecosystems together, or extracting more value from consumers. The CMA and EU have shown willingness to tackle these abuses: from Google’s self-preferencing to Amazon’s use of data and ranking algorithms, competition authorities are demanding changes to ensure a level playing field. The UK is complementing case-by-case enforcement with the new DMCC Act regime, aiming to prevent certain abuses by imposing upfront obligations on powerful digital firms. This proactive regulatory stance mirrors the EU’s DMA and indicates a convergence towards stricter oversight of digital markets on both sides of the Channel.
In British competition law practice, the emphasis is on effects-based analysis – the CMA and courts will look at how AI-driven conduct affects competition and consumers. If the effect is to reduce competition (through collusion or foreclosure), the behaviour will attract intervention, regardless of the technology involved. At the same time, regulators must be careful not to stifle innovation. Both the CMA and EU authorities acknowledge that algorithms and AI can be pro-competitive (optimising processes, reducing costs) and that overbroad intervention could chill beneficial innovation. Thus, the challenge is striking the right balance – being vigilant against anti-competitive risks without undermining the efficiencies that AI can bring.
The CMA’s current strategy, which is likely to continue, is an informed, targeted approach: building its own technical capabilities, investigating potential abuses early, and imposing remedies or rules where there is clear evidence of harm. The coordinated international approach (with the EU and US) on AI also suggests future convergence in antitrust enforcement standards for AI markets. Businesses deploying AI in the UK must ensure compliance with competition law – for example, by embedding antitrust compliance into algorithm design (sometimes called “algorithmic compliance by design”). Firms with market power should take care that their AI-driven practices (pricing, recommendations, data usage) do not cross the line into exclusionary conduct.
In conclusion, UK competition law fully applies to AI-driven markets, even if the tools of collusion or exclusion have evolved. The legal principles of prohibiting agreements that restrict competition and abuses of dominance are technology-neutral. However, the application of these principles is being refined through cases, new regulations, and policy developments to address the unique features of algorithms – such as their speed, opacity, and autonomy. The CMA’s active role, combined with insights from EU developments, indicates a robust framework is being built to keep AI markets competitive. As AI continues to advance, the CMA and other regulators will likely further refine their approaches, ensuring that innovation can thrive hand in hand with competitive fairness rather than at its expense.
References (Harvard style)
- Autorité de la Concurrence & Bundeskartellamt (2019) Algorithms and Competition, joint report. Available at: https://www.bundeskartellamt.de/algorithms-and-competition (Accessed: 5 May 2025).
- CMA (Competition and Markets Authority) (2016a) Online sales of posters and frames (Case 50223) – Infringement Decision (non-confidential version). CMA, 30 September 2016. Available at: https://assets.publishing.service.gov.uk/media/57eece2240f0b606e7000020/Online_Sales_decision.pdf (Accessed: 2 May 2025).
- CMA (2016b) Press Release: CMA warns online sellers about price-fixing. 7 November 2016. Available at: https://www.gov.uk/government/news/cma-warns-online-sellers-about-price-fixing (Accessed: 1 May 2025).
- CMA (2021) Algorithms: How they can reduce competition and harm consumers. CMA Research Paper, 19 January 2021. Available at: https://www.gov.uk/government/publications/algorithms-how-they-can-reduce-competition-and-harm-consumers (Accessed: 30 April 2025).
- CMA (2023) Decision accepting commitments: Amazon Marketplace. 3 November 2023. Available at: https://www.gov.uk/cma-cases/investigation-into-amazons-marketplace (Accessed: 2 May 2025).
- Digital Markets, Competition and Consumers Act 2024 (UK), c.13. Royal Assent 24 May 2024. (Part 1 of the Act establishes the digital markets regime administered by the CMA.)
- Competition Act 1998 (UK), c.41. (Primary legislation establishing the Chapter I prohibition on anti-competitive agreements and Chapter II prohibition on abuse of dominance).
- Ezrachi, A. and Stucke, M. E. (2016) Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy. Cambridge, MA: Harvard University Press.
- European Commission (2017) Press Release: Antitrust – Commission fines Google €2.42 billion for abusing dominance as search engine by giving illegal advantage to own comparison shopping service. Brussels, 27 June 2017 (IP/17/1784).
- European Commission (2017) Final report on the E-commerce Sector Inquiry, COM(2017) 229 final, with accompanying Commission Staff Working Document SWD(2017) 154. (Provides statistics on the use of pricing algorithms by online retailers.)
- European Union (2017) Algorithms and Collusion – Note from the European Union. OECD Competition Committee Discussion Paper, June 2017. Available at: https://one.oecd.org/document/DAF/COMP/WD(2017)12/en/pdf.
- OECD (2017) Algorithms and Collusion: Competition Policy in the Digital Age. OECD Secretariat Background Note (DAF/COMP(2017)4).
- T-Mobile Netherlands (Case C-8/08) [2009] ECR I-4529. (EU Court of Justice judgment, stating that Article 101 TFEU does not catch mere intelligent adaptation to competitors’ conduct absent an agreement or concerted practice).
- Vestager, M. (2017) ‘Algorithms and Competition’ Speech at Bundeskartellamt 18th Conference on Competition, Berlin, 16 March 2017. (Available via European Commission website). Key quote: “companies can’t escape responsibility for collusion by hiding behind a computer program.”
- Woodpulp Cartel (Ahlström v Commission) (Cases C-89/85 etc.) [1993] ECR I-1307. (EU case distinguishing parallel conduct from concertation).
- Eturas (Case C-74/14 Eturas UAB v Lietuvos konkurencijos taryba) [2016] 4 CMLR 14. (EU Court of Justice ruling that participants in a common online system could be liable for a concerted practice if they were aware of and accepted a system-imposed restriction on discounts).