Algorithms perform a variety of functions within digital ecosystems, many of which have the potential to impact the effective operation of competition in these spaces. Although certain market problems that involve the use of algorithms may take us to the frontiers of contemporary competition law, competition enforcers have already had occasion to grapple with the practical implications of their widespread use in various contexts. This article considers what might be termed the more mundane side of competition enforcement in algorithm-heavy environments, examining areas where algorithm-driven concerns have been accommodated within the established EU competition rules, or at least, have required mere evolution rather than revolution in their application.

By Niamh Dunne1

 

I. INTRODUCTION

That the use of algorithms is pervasive across the digital economy is something of a trite observation. Algorithms perform a broad variety of functions within digital ecosystems, many of which have the potential to impact upon — whether for good or bad — the effective operation of competition in these spaces. It is equally trite, though not untrue, to portray algorithms as presenting unique challenges for competition enforcement, in part due to the mystery that shrouds what is often complex proprietary technology, in part due to fears that such technology may sooner or later develop an (anticompetitive) mind of its own. Yet, even if certain market problems that involve the use of algorithms take us to the frontiers of contemporary competition law, algorithms are by now part of the furniture of many modern markets. Competition enforcers, accordingly, have already had occasion to grapple with the practical implications of their widespread use in various contexts. This short article considers what might be termed the more mundane side of competition enforcement in algorithm-heavy environments, examining areas where algorithm-driven concerns have been accommodated within the established rules, or at least, have required mere evolution rather than revolution in their application. These examples illustrate, once again, the inherent flexibility of competition law; but they also serve as an important reminder that the best should not be the enemy of the good, even when it comes to the increasingly polarized question of how to regulate competition in the digital economy.

The specter of algorithmic “collusion by robots” is one of the more challenging, and certainly most evocative, issues facing competition law today. Questions of whether such behavior is likely to arise, whether the current legal framework is adequate to address the resulting harm, and what form any necessary amendments might take, have been discussed enthusiastically and near-exhaustively in the existing literature. Algorithmic collusion is so attention-grabbing, arguably, because it combines a perfect storm of the “supreme evil of antitrust” (i.e. horizontal collusion) with a newly-emerged and apparently intractable limit to the coverage of the antitrust rules. The issue is knotty, thought-provoking, and — having largely failed to materialize in practice thus far — lends itself to original thinking and argument.

Yet away from the glamorous but as-yet unrealized problem of algorithmic collusion, the use — and misuse — of algorithms by firms is already having an impact on competition law and policy. That is, although algorithmic collusion tends to grab the attention of policymakers and scholars, in practice the sorts of algorithm-driven issues faced by enforcers to date have been more quotidian and concrete. Moreover, while the use of algorithms may complicate the factual background to a case, the existing examples discussed in this piece suggest that the problems encountered thus far have proven capable of accommodation, to a greater or lesser extent, within the framework of the existing rules.

The focus of this piece is recent enforcement activity of the European Commission under Articles 101 and 102 of the Treaty on the Functioning of the European Union, where the competitive impact of the use of algorithms by market actors has been considered and, in some instances, sanctioned. Three distinct categories of cases are examined: where the use of algorithms creates the opportunity for strategic behavior designed to take advantage of their otherwise unproblematic operation to anticompetitive effect; where the market-wide use of algorithms exacerbates the harm caused by anticompetitive conduct; and where the condemned behavior comprises a decision to depart from the ordinary operation of an algorithm in certain circumstances. For each, we consider how the use or operation of the algorithm fits into the theory of harm, and the extent to which the innovative digital context requires concomitant innovation in the application of the relevant legal rules. The piece concludes by considering some common themes that arise from this emerging enforcement practice.

 

II. ONLINE ADVERTISING: ALGORITHMS AS AN ARTEFACT OF THE MARKET

We begin with the most straightforward situation, namely where the algorithm is merely an artefact of the relevant market, rather than either exacerbating or comprising the theory of harm. The fact that algorithms are central to the functioning of a market does not mean, of course, that those algorithms are necessarily tied up with the alleged anticompetitive conduct. In this first category of cases, it is important to understand how the particular algorithm operates principally in order to understand how it creates opportunities for strategic exclusionary behavior, meaning that otherwise ambiguous or inexplicable practices take on a particular anticompetitive gloss in the circumstances.

Take, for instance, several recent cases involving online advertising markets, specifically various programs offered by Google. Both AdWords (aimed at advertisers) and AdSense (aimed at web publishers) utilize complex algorithms to place paid-for advertisements within Google’s own search results and third-party websites. These algorithms thus determine, to a large extent, how (and how well) online advertising markets operate: whether advertisers reach appropriate customers, whether those customers are likely to engage with the ads presented to them, and what sorts of revenue are earned by publishers and Google itself.

The operation of these programs has been of pivotal importance in two recent infringement decisions. In Guess, the Commission found a “by object” infringement of Article 101 where a clothing manufacturer banned its authorized retailers from using or bidding on its brand names and trademarks as keywords in AdWords.2 In Google Search (AdSense), the Commission found an infringement of Article 102 from exclusive dealing, notably a “relaxed exclusivity” policy which required publishers to reserve the most profitable space on their search results pages for Google’s adverts.3 In both instances, the anticompetitive behavior only made sense because of how the underlying algorithm operated. Guess hinged upon the fact that the price paid by advertisers was dependent, inter alia, upon the demand for the relevant keyword in an auction process. Thus, Guess could reduce its own downstream e-commerce advertising costs by reducing the competition it faced in keyword auctions. The relaxed exclusivity policy in AdSense, conversely, took advantage of the fact that consumer attention online is strongly biased towards certain display positions — in short, we are loath to scroll down — meaning that advertisers are willing to pay much more for certain positions compared to others. This again was reflected in how the AdSense algorithm operated. Thus, the performance of the algorithm in both cases created the opportunity for exploitation: the defendant’s practices were designed to effect an anticompetitive outcome taking into account how the market worked, including the functioning of the relevant algorithm on a day-to-day basis.
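The auction dynamic underlying Guess can be captured in a toy model. The sketch below is purely illustrative and is not a description of how AdWords actually prices keywords: it assumes a simple second-price auction, in which the winner pays the runner-up's bid, so that excluding rival bidders (here, the hypothetical authorized retailers) directly lowers the brand owner's advertising cost. All names and figures are invented.

```python
# Illustrative only: a minimal second-price keyword auction, loosely
# modelling the dynamic described above. Real ad auctions are far more
# complex; the bid values here are hypothetical.

def clearing_price(bids):
    """The winner pays the second-highest bid (second-price auction)."""
    ordered = sorted(bids, reverse=True)
    return ordered[1] if len(ordered) > 1 else 0.0

# With authorized retailers also bidding on the brand keyword:
with_retailers = clearing_price([2.00, 1.80, 1.50])
# After the brand bans retailers from bidding on the keyword:
without_retailers = clearing_price([2.00])

print(with_retailers)     # 1.8: rival demand pushes the price up
print(without_retailers)  # 0.0: no rival demand, so the cost falls
```

The point is simply that, in a demand-driven auction, suppressing rival bidders is itself a cost-reduction strategy, which is why the keyword ban made commercial sense for Guess.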

Yet, arguably, the algorithmic context was neither dispositive nor, ultimately, particularly interesting in either case, besides presenting a distinctive market feature to be manipulated to anticompetitive effect. Both types of anticompetitive behavior could, perhaps just as easily, have arisen in a brick-and-mortar environment, arguably to equally detrimental effect. Guess, for instance, could alternatively have taken the form of an explicit ban on the use of television advertising by authorized retailers, or in specific print publications. The facts of AdSense might be analogized to a dominant outdoor advertising company entering into an agreement for bus-stop advertising with a local authority, where the authority is permitted to use rival suppliers for bus-stops located on minor roads but must use the dominant firm for stops on major routes. In each instance, once it is understood how the algorithm operates, and thus how it conditions competition within the particular marketplace, the fact that we are dealing with complex digital technology becomes largely irrelevant. That is, while the algorithm may create the opportunity for exploitation on the particular facts, its use or abuse neither comprises nor augments the substance of the theory of harm.

 

III. RESALE PRICE MAINTENANCE: ALGORITHMS AS AGGRAVATORS OF HARM

By contrast, a second category of recent enforcement practice involves situations where the use of algorithms renders the anticompetitive impact of restrictive conduct both more serious and more durable in specific market circumstances. Our examples here again arise from the use of vertical restraints in the e-commerce sector, specifically resale price maintenance (“RPM”) practices that aim to soften the vigorous price competition that has been facilitated by the emergence of online retailing.

The growth of e-commerce has resulted in a well-documented step-change in the type and frequency of the use of vertical restraints by manufacturers in their distribution policies, which has prompted a renewed interest in their treatment by antitrust authorities, most notably the European Commission.4 In 2018, the Commission took five infringement decisions prohibiting RPM practices in the online sphere.5 In each instance, it relied upon the well-established, but also much-maligned, “by object” characterization of fixed or minimum RPM under Article 101(1), trotting out thirty-year-old precedent to support its automatic condemnation of the use of these practices in the context of business models and market structures that did not exist three decades ago.6

What is arguably most remarkable is the manner in which the Commission anchored its decidedly old-school skepticism of vertical price-fixing within a very modern market context, specifically by reference to the recurrent use of algorithms by economic actors in the e-commerce sphere. On the one hand, the reflexive “by object” prohibition of RPM is hard to square with the nominal movement towards a “more economic approach” to EU competition enforcement. There is no hint, in these decisions, of the development in Leegin a decade previously, whereby an admittedly divided U.S. Supreme Court shifted the treatment of RPM from the per se illegal to the “rule of reason” assessment category under §1 of the Sherman Act.7 But nor is there any hint of the Court of Justice judgment in Maxima Latvija from 2015, where the Court drew a clear distinction between the treatment of horizontal and vertical restrictions under Article 101(1).8 This was a distinction notably missing from the original Binon jurisprudence on RPM, with its defiantly literal, undifferentiated reading of the Article 101(1)(a) prohibition.9 The Commission’s approach becomes even more questionable when viewed in light of recent case law that gives central importance to the legal and economic context of any restraint, and in particular, the proposition that restraints with a plausible efficiency rationale are unsuitable for “by object” condemnation,10 even if they may be found to have the effect of restricting competition in practice.11

Yet, despite these reservations, there is one dimension to this recent enforcement activity that may justify the application of this older, arguably somewhat dubious precedent in the digital context. Namely, as the Commission recognized in each of its RPM decisions, the recurrent use of algorithms to set and monitor prices in the e-commerce sphere may significantly increase the detrimental market-wide impact of individual RPM policies, thus buttressing the contention that such practices are harmful to competition by their very nature. There are two ways in which the use of algorithms feeds into the antitrust assessment in this context.

First, as the Commission explicitly noted in its Asus decision,12 manufacturers may use software monitoring tools to scrutinize the pricing practices of online retailers, thus enabling the detection of lower-than-permitted retail prices both more rapidly and more systematically. Accordingly, RPM practices are potentially more problematic in the e-commerce sphere because, through the use of algorithms, they can be enforced more effectively, and thus to wider, more detrimental effect.
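The monitoring logic described here reduces to a very simple comparison, repeated at scale. The sketch below is purely illustrative: the retailer names, prices, and price floor are invented, and real monitoring tools scrape and compare prices continuously. It merely shows why algorithmic surveillance makes deviations from an RPM policy quick and systematic to detect.

```python
# Hypothetical sketch of algorithmic RPM monitoring: a manufacturer
# compares observed online prices against its minimum resale price.
# Names and figures are invented, not taken from any decision.

MINIMUM_RESALE_PRICE = 99.00

observed_prices = {
    "retailer_a": 99.00,
    "retailer_b": 94.50,   # pricing below the agreed minimum
    "retailer_c": 101.00,
}

def detect_deviations(prices, floor):
    """Return the retailers pricing below the agreed floor."""
    return [name for name, price in prices.items() if price < floor]

print(detect_deviations(observed_prices, MINIMUM_RESALE_PRICE))
# ['retailer_b']: flagged for "enforcement" contact by the manufacturer
```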

Second, the widespread use of price-setting algorithms by online retailers potentially reinforces the restrictive effect of individual RPM practices, resulting in a market-wide softening of price competition. This point was discussed most extensively in the Pioneer decision.13 The idea is as follows. Online retailers, typically, use software programs to track the prices charged by competing online retailers, which then automatically adjust their own prices downwards to match the lowest available online. In such circumstances, if one retailer “cheats on,” i.e. prices below, the agreed resale price, this can have market-wide implications, as the algorithms deployed by its competitors each adjust those retailers’ prices downwards to meet the competitive challenge: what the Commission terms “online price erosion.” Conversely, where a manufacturer successfully eliminates price-cutting by a single retailer, this again can have market-wide implications of a more negative variety, as those rival algorithms adjust the competitors’ prices upwards to match the new, higher market norm. Thus, it becomes more important, but also more effective, to enforce resale price policies against individual low-price retailers: in one fell swoop, a manufacturer may achieve an across-the-board increase in online prices for its product.
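The "online price erosion" dynamic can be simulated in a few lines. This is a deliberately stylized sketch, not a model of any actual repricing tool: it assumes each retailer's algorithm simply matches the lowest price visible on the market, and all figures are hypothetical.

```python
# Stylized simulation of the price-matching dynamic discussed in Pioneer:
# each retailer's algorithm drops its price to the market minimum.
# All prices are hypothetical.

def match_lowest(prices):
    """One round of price-matching: every retailer meets the market floor."""
    floor = min(prices.values())
    return {name: min(price, floor) for name, price in prices.items()}

rpm_price = 100.0
prices = {"a": rpm_price, "b": rpm_price, "c": 95.0}  # retailer c "cheats"

print(match_lowest(prices))
# one deviator drags every algorithm down to 95.0 (market-wide erosion)

# If the manufacturer forces retailer c back up to the RPM price...
prices["c"] = rpm_price
print(match_lowest(prices))
# ...every algorithm re-converges on 100.0: an across-the-board increase
```

The simulation illustrates the Commission's point: disciplining a single price-cutter is enough to move the whole market, which is precisely what makes RPM enforcement both more attractive and more harmful in this setting.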

Taken together, therefore, the use of price-tracking and price-setting algorithms by manufacturers and retailers has the effect of reinforcing the efficacy of RPM policies in online markets. The assumption within the recent Commission practice, moreover, is that by multiplying the impact of pricing interventions, and so avoiding wide-scale online price erosion, the use of algorithms serves to amplify the competition detriment associated with such policies — and thus to confirm their inherently restrictive “object” for the purposes of Article 101(1). RPM, as a theory of antitrust harm, has been around a long time, and certainly does not require a high-tech context to be implemented successfully or to generate anticompetitive market impact. Indeed, notably, in each of the recent infringement decisions, much of the day-to-day business of “enforcing” the RPM policies involved bilateral personal contacts between staff of the relevant manufacturer and retailer. Equally, however, even if RPM has long been established as a “by object” restriction, and one moreover which the Commission seems reluctant to accept might be exempted under Article 101(3),14 EU-level enforcement against vertical restraints has not been a priority in the modernized era. It thus appears to be the specific e-commerce context, and in particular the recurrent use of algorithms to monitor and manage competition, which has breathed renewed policy urgency into this old and not-uncontentious antitrust concern. In this instance, accordingly, while the algorithmic context does not create the competition problem, it serves to exacerbate its impact and thus tip the balance in favor of enforcement.

 

IV. SELF-PREFERENCING: ALGORITHMS AS EXPRESSION OF “NORMAL COMPETITION ON THE MERITS”

This brings us to our third and final category, namely where the operation of an algorithm is considered to delimit the parameters of “normal” competition within a market, so that intentional deviation from its ordinary functioning may be construed as abusive conduct.

The paradigmatic case is the Google Search (Shopping) infringement decision.15 This revolved around claimed efforts by Google to improve the performance of its own comparison shopping product — formerly known as Froogle, later renamed Google Product Search — by, in essence, giving it a leg-up within its general search engine results. The impugned behavior consisted of two interlinked modifications to the “organic” general search algorithm: tweaks to ensure that Product Search would receive more prominent placement than the objective quality of the service might otherwise deliver, and the application of “adjustment algorithms” to competing comparison shopping products which had the effect of demoting those competitors. This behavior, coupled with evidence that the practice resulted in a diversion of traffic away from competing websites towards Google’s own product, supported the Commission’s finding that the conduct amounted to an abuse of dominance, contrary to Article 102.
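The two interlinked modifications at issue can be illustrated schematically. The sketch below is a deliberate oversimplification of what any real search ranking does: it assumes results are ordered by a single quality score, with a hypothetical "adjustment" step that boosts the platform's own service and demotes rivals. The scores, names, and multipliers are all invented for illustration.

```python
# Hypothetical sketch of a self-preferencing adjustment layered on top of
# an "organic" quality ranking. All names, scores, and factors are invented.

def rank(results, own_service, boost=2.0, demotion=0.5):
    """Rank by quality score after applying the (hypothetical) adjustments."""
    adjusted = {
        name: score * (boost if name == own_service else demotion)
        for name, score in results.items()
    }
    return sorted(adjusted, key=adjusted.get, reverse=True)

quality = {"rival_a": 9.0, "rival_b": 8.0, "own_shopping": 6.0}

print(sorted(quality, key=quality.get, reverse=True))
# ['rival_a', 'rival_b', 'own_shopping']: the unadjusted "merit" order
print(rank(quality, "own_shopping"))
# ['own_shopping', 'rival_a', 'rival_b']: the adjusted display order
```

On this stylized view, the abuse lay not in the organic ranking itself but in the adjustment layer: the divergence between the merit order and the displayed order is what diverted traffic towards Google's own service.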

A number of aspects are noteworthy. The concept of abuse refers to the behavior of dominant undertakings, and in particular, hinges on the nebulous notion of “recourse to methods different from those which condition normal competition on the merits.”16 In this instance, the condemned behavior consisted of allegedly unwarranted interferences in the ordinary operation of Google’s general search algorithm, thus suggesting that the latter was understood to represent the broad parameters of the functioning of “normal competition” in that context. The Shopping decision explicitly steered clear of declaring access to Google’s general search engine to be “indispensable” within the meaning of the jurisprudence on refusal to deal, however, distinguishing the latter on somewhat unconvincing procedural grounds.17 There is, therefore, an inherent tension at play. On the one hand, absent any valid claim of objective necessity to gain access to Google’s search engine, it might be argued that the firm should be permitted to develop its search algorithm in whatever manner it sees fit. Second-guessing the operation and development of Google’s complex proprietary technology seems an ill-suited task for competition authorities, particularly in light of its lack of “essential facility” status. On the other hand, the infringement decision evinces a clear skepticism of the business choice to diverge from the ordinary operation of the search algorithm in that instance: if Google’s search engine is of such superior quality that it can earn and sustain a position of super-dominance in most EEA national markets for a decade-long period, why depart from it now, if not for exclusionary purposes?

Instead of proceeding by analogy with refusal to deal, the Commission articulated a standalone theory of harm, which has come to be known as the rule against “self-preferencing” by dominant digital platforms. Self-preferencing can be defined as “giving preferential treatment to one’s own products or services when they are in competition with products and services provided by other entities using the platform.”18 Although the Commission endeavored to anchor the rule against self-preferencing in earlier case law, subsequent competition policy developments suggest that it is intrinsically tied to its digital context. More specifically, the theory is linked to the so-called “rule-setting function” of certain platforms that provide digital intermediation services, and which thus create and control the rules and institutions through which their users interact.19 For a general search engine like Google, this coincides with the design of the ranking algorithm.20 Thus, in designing and implementing the algorithms that, in effect, comprise the substantive core of its search product, it has been suggested that dominant firms like Google should have a pro-active, quasi-regulatory duty to ensure that competition on their platform is “fair, unbiased, and pro-users.”21 This, arguably, represents a further expansion of the well-established albeit fairly fuzzy “special responsibility” concept specifically in the digital context.22 For our purposes, when it comes to future algorithm development, it means that dominant platforms might be expected to develop their technology with an eye to ensuring equal opportunities for all, and not merely some more objective (but potentially exclusionary) conception of the “best quality” outcome.

So far, so innovative. Yet it was contended at the outset of this article that the examples to be considered largely fit within the confines of the established antitrust rules: evolution not revolution, and so forth. The approach taken by the Commission in Shopping was that a rule against self-preferencing was absolutely nothing new; that, instead, “conduct consisting in the use of a dominant position on one market to extend that dominant position to one or more adjacent markets…constitutes a well-established, independent, form of abuse falling outside the scope of competition on the merits.”23 This contention is contentious, to put it mildly.24 A more cynical reading of the Commission’s reluctance to recognize its inventiveness here might highlight its desire to impose a then record-breaking fine of €2.4 billion, a feat that would be more difficult if it was obliged to give significant discounts for novelty. Yet even if the claim that self-preferencing was already well-established as a freestanding theory of harm rings hollow, it nonetheless constitutes a plausible progressive development of the existing case law. Thus if Télémarketing represented the state of the art in terms of the “new economy” in the early 1980s,25 the elaboration of a clear and robust rule against self-preferencing today may be necessary in order adequately to regulate the greatly increased reach and power of today’s Big Tech firms. The precise content and operation of this rule is a more difficult question, but also one, fortunately, lying beyond the scope of this short article.

 

V. CONCLUDING REMARKS

The purpose of this piece was to illustrate some of the — admittedly, more mundane — ways in which competition enforcers, specifically the European Commission, have already encountered and dealt with the operation of algorithms in their day-to-day enforcement activity. From this incomplete sample we can nonetheless discern a number of recurrent themes that serve to better inform our understanding of the antitrust treatment of algorithms going forward.

As was noted in the introduction, algorithms have become part of the furniture in many markets, meaning that any proper grasp of how competition works in those sectors must at least account for their use and operation. This could conceivably be a complicated task where, for instance, the algorithm at issue has a “black box” quality that renders it difficult even for computer scientists to comprehend, let alone competition lawyers or economists. Yet, from an antitrust perspective, it is not the precise operation of the technology that is of interest, but rather its actual or anticipated impact on competition within the relevant marketplace. This, often, is a rather more straightforward question, as the examples discussed above illustrate.

Following on from the observation that the use of algorithms is now standard practice in many markets, the starting point for antitrust analysis in this context is, typically, that undertakings should not deliberately interfere with the free functioning of such programs in a manner that is likely to have anticompetitive effect. Accordingly, attempts to restrict or manipulate the information that an algorithm receives, as in Guess, or to modify the operation of the algorithm in individual instances specifically to the detriment of competitors, as in Shopping, may well be conceived of as abusive or otherwise anticompetitive behavior. In both cases, the implication was that the ordinary, unobstructed operation of the algorithm represented the parameters of normal competition within the relevant market; deliberate efforts to avoid or diverge from the routine algorithmic processes that now delimit the competitive plane are thus generally suspect.

The principal exception is where it is the algorithm itself that causes the competition problem: antitrust enforcers, of course, should not defer to inherently anticompetitive technology. The anticipated problem of algorithmic collusion was noted at the outset, and it raises particular questions about liability for “deviant” technology. But the interaction between price-setting and price-monitoring technologies and RPM policies offers an alternative, more down-to-earth example. Article 101 is unlikely to prohibit the unilateral application of such technologies by individual firms, even where the practice occurs on a market-wide basis and has obvious effects in softening competition. Yet the parallel use of algorithms to, in effect, enforce and reinforce RPM practices, which do fall within the purview of Article 101, can serve to confirm the illegality of the latter. Accordingly, even if the technology itself does not amount to a “smoking gun” — something that may be difficult to establish — it may become part of a more contextualized story about how and why a particular practice, in specific market circumstances, violates the antitrust rules. As always in competition law, context is everything; the increasing abundance of algorithms that are being used to condition and direct the markets around us merely opens another chapter in this on-going story.


1 Law Department, London School of Economics.

2 Case AT.40428—Guess, Decision of December 17, 2018.

3 European Commission Press Release, Antitrust: Commission fines Google €1.49 billion for abusive practices in online advertising, March 20, 2019 (the AdSense decision remained unpublished at the time of writing).

4 European Commission, Final Report on the E-Commerce Sector Inquiry, COM(2017) 229 final, May 10, 2017.

5 Cases AT.40465—Asus, AT.40469—Denon & Marantz, AT.40181—Philips, AT.40182—Pioneer, Decisions of July 24, 2018, and Case AT.40428—Guess, Decision of December 17, 2018.

6 Specifically, Cases C-243/83 Binon v. AMP EU:C:1985:284, C-311/85 VVR v. Sociale Dienst van de Plaatselijke en Gewestelijke Overheidsdiensten EU:C:1987:418, and C-27/87 SPRL Louis Erauw-Jacquery v. La Hesbignonne SC EU:C:1988:183.

7 Leegin Creative Leather Products, Inc. v. PSKS, Inc., 551 U.S. 877 (2007).

8 Case C-345/14 SIA „Maxima Latvija” v. Konkurences padome EU:C:2015:784.

9 Case C-243/83 Binon v. AMP EU:C:1985:284, specifically paragraph 44.

10 In particular, Cases C-67/13 P CB v. Commission EU:C:2014:2204, C-307/18 Generics (UK) and Others EU:C:2020:52, and C-228/18 Budapest Bank and Others EU:C:2020:265.

11 As was ultimately the outcome in Case T-491/07 RENV CB v. Commission EU:T:2016:379.

12 Case AT.40465—Asus, paragraph 27.

13 Case AT.40182—Pioneer, paragraphs 134-139.

14 See, e.g. European Commission, Guidelines on Vertical Restraints (OJ C 130/1, 19.5.2010), paragraphs 223-225.

15 Case AT.39740—Google Search (Shopping), Decision of June 27, 2017.

16 Case AT.39740—Google Search (Shopping), paragraph 333.

17 Case AT.39740—Google Search (Shopping), paragraphs 645-652.

18 European Commission, Competition Policy for the Digital Era. A report by Jacques Crémer, Yves-Alexandre de Montjoye and Heike Schweitzer, April 4, 2019, p.7.

19 Ibid. p.60.

20 Ibid.

21 Ibid. p.61.

22 First established in Case C-322/81 Michelin v. Commission EU:C:1983:313.

23 Case AT.39740—Google Search (Shopping), paragraph 649.

24 For further critique of the reasoning in Google Search (Shopping), see Niamh Dunne, “Dispensing with Indispensability,” 16 Journal of Competition Law & Economics 74 (2020).

25 Case C-311/84 CBEM v. CLT and IPB EU:C:1985:394, relied upon in Google Search (Shopping) at paragraph 334.