The conversation around and study of the use of algorithms in pricing and other competitively sensitive decisions remains vibrant and is increasingly well-informed. Early theoretical work paved the way for government studies and more recently – and most interestingly – experimental and real-world empirical studies. At the same time, technology continues to advance, and with it the variety and sophistication of the software deployed. The law does not seem to have kept pace. Enforcement to date has targeted pure cartel agreements that happen to use pricing algorithms as a tool for implementation. The most likely harm from the deployment of pricing algorithms, an increased capacity for optimal tacitly collusive outcomes, is unlikely to violate the law in any developed antitrust system. More speculative harms, including actual algorithmic collusion, seem equally outside the realm of antitrust. All of these concerns arise against a backdrop of apparent efficiencies that remain under-theorized and under-studied. We outline findings on algorithmic pricing in theoretical and empirical research, explain how they interact with existing legal rules, and suggest promising areas for future study and policy development.

By Max Huffman & Dr. Maria José Schmidt-Kessen[1]

 

I. INTRODUCTION

Beginning with an important paper by Salil Mehra,[2] the last six years have seen animated conversation and a growing body of literature by academics and policymakers on the potential threat to markets from coordinated marketplace conduct facilitated by the use of algorithms in pricing and other competitively sensitive decisions. At the extreme, such coordination might rise to the level of algorithmic collusion. The potential for algorithmic collusion derives from the fact that, across broad swaths of the economy, pricing decisions are increasingly automated or partially delegated to algorithms, which may have the capacity to optimize outcomes with limited or no human intervention.

Ariel Ezrachi & Maurice Stucke outlined four scenarios for in which the use of algorithms might lead to collusive outcomes in markets: (1) the algorithm as messenger, (2) the algorithm as hub in a hub-and-spoke agreement, (3) the algorithm as predictable agent, and (4) the algorithm as an autonomous agent.[3] The model matters: the correct selection and application of legal rules differ based both on the type of algorithm and on the enterprise structure in which the algorithm is deployed. These differences produce an immense variety of analytical frames leading, on application of competition law, to potentially different outcomes. This renders unanswerable the broad question whether algorithmic pricing is harmful or beneficial for market competition. In prior scholarship we have tried to address that question at a more granular level.

In this piece we address the latter three Ezrachi-Stucke scenarios: first, algorithmic pricing implemented in a centrally orchestrated fashion via an online platform (hub-and-spoke), and second, pricing algorithms of varying sophistication deployed by traders individually (the predictable agent and autonomous agent scenarios). We highlight some of the findings and some of the open questions that will have to be resolved before a clear line can be drawn between legitimate and anti-competitive uses of algorithmic pricing. We reach a broad summary conclusion that theories of harm are robust. Ongoing attention by policymakers, enforcers, and scholars must also engage questions of the efficient outcomes algorithmic decision-making can enable.

 

II. CENTRALIZED ALGORITHMIC PRICING

A broad category of algorithm use relates to the pricing of diffuse offerings centralized in a single hub, which characterizes online platform enterprises. In recent work we studied the effect of algorithmic pricing in the hub-and-spoke structure of service provider-platform agreements, analyzing the expected treatment under both EU and U.S. competition law.[4] Algorithmic pricing presents questions of speed of decision-making and breadth of information processing – the consideration of scores of variables in pricing decisions, rather than the handful that can be considered by a human decisionmaker – that heighten concerns for both coordinated outcomes and maintenance of dominance. At the same time, these outcomes arise in the presence of apparent transaction efficiencies, with indeterminate trade-offs; the likely legal analysis also differs depending on the degree of complexity of the pricing algorithm. We conclude that EU and U.S. competition law systems approach this indeterminacy from opposite defaults, with the EU defaulting to prohibition and the U.S. defaulting to permissive treatment.

Our analysis relies on a deliberately simplistic binary distinction between “if-then” algorithms and “machine learning” algorithms (abbreviated “ML”). The if-then algorithm defines a path to an outcome based on observed inputs – for example, a marketing manager might instruct the software to undercut the advertised prices of an established group of known competitors by a set discount. The simplicity of this command does not undermine the important role of the software in pricing, which is better able than a human agent to monitor competitor conduct and continually update prices. However, the software in this example does nothing that is not directly commanded by a human agent. The results of the commands are highly predictable and can be reverse-engineered; it is not unreasonable to attribute those results to the human responsible for the computerized decision. Thus, agencies in both the U.S. and the UK have not had difficulty imposing liability on human actors who have used algorithms as the mechanism to execute cartel agreements.[5]
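
A minimal sketch of such a rule, using an invented data feed (fetch_advertised_price), an invented competitor list, and invented parameter values, illustrates how little the software does beyond executing the human command:

```python
# Hypothetical "if-then" pricing rule: undercut the cheapest monitored rival
# by a fixed discount, subject to a human-set price floor. The data feed,
# competitor list, and parameter values are invented for illustration.

DISCOUNT = 0.05                      # undercut the cheapest rival by 5 percent
PRICE_FLOOR = 8.00                   # never price below this human-set floor
COMPETITORS = ["rival_a", "rival_b", "rival_c"]


def fetch_advertised_price(competitor: str) -> float:
    """Placeholder for a scraper or API call returning a rival's listed price."""
    raise NotImplementedError


def if_then_price() -> float:
    """Return the price dictated by the fixed rule; nothing here is learned."""
    rival_prices = [fetch_advertised_price(c) for c in COMPETITORS]
    target = min(rival_prices) * (1 - DISCOUNT)
    return max(target, PRICE_FLOOR)
```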

The ML algorithm differs in that it is recursive. In addition to searching for the information it is programmed to consider, and responding to that information, the ML algorithm records the results of its response and adjusts its future decisions based on those results. For example, the same if-then command might produce a particular sales volume and net profit, which the algorithm would take into account when deciding how to react to competitor pricing in a second period. This more reactive software might be expected to engage in continual refinement, incorporating the data gleaned from past pricing decisions, and to move toward higher-profit outcomes.
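
A correspondingly minimal sketch, again with hypothetical names and parameters, shows how the same rule becomes recursive once the software adjusts its own discount in light of the profit each prior price produced:

```python
# Hypothetical recursive variant of the same rule: the discount is no longer
# fixed but is nudged up or down each period depending on whether the prior
# period's profit improved. Names and step sizes are invented for illustration.

class RecursivePricer:
    def __init__(self, discount: float = 0.05, step: float = 0.01):
        self.discount = discount     # current undercutting discount
        self.step = step             # size of each adjustment
        self.last_profit = None      # profit observed in the prior period
        self.direction = 1           # +1 widens the discount, -1 narrows it

    def price(self, cheapest_rival: float, floor: float) -> float:
        """Apply the current discount to the cheapest rival's price."""
        return max(cheapest_rival * (1 - self.discount), floor)

    def record_outcome(self, profit: float) -> None:
        """Keep adjusting the discount in the same direction while profit
        improves; reverse direction when it worsens."""
        if self.last_profit is not None and profit < self.last_profit:
            self.direction = -self.direction
        self.discount = min(max(self.discount + self.direction * self.step, 0.0), 0.5)
        self.last_profit = profit
```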

The more complex set of variables and decision-making processes in machine learning reduces predictability and the potential for reverse-engineering decisions. It also abstracts ultimate pricing decisions from the point of human intervention. The ML algorithm thus reflects an entry point into the general space of “artificial intelligence,” where software engages in optimization and improves its own results both without human intervention and to a degree beyond what human actors may have been able to achieve on their own. Much of the academic study and policy analysis regarding algorithmic pricing considers these ML algorithms, positing that software packages may “communicate” and perhaps “agree,” despite the conduct not being attributable to a person.

The centralized algorithmic pricing model arises in the context of hub-and-spoke coordination, with the algorithm deployed by a firm that employs, retains as contractors, or provides pricing and other services to, highly diffuse input suppliers.[6] In both the EU and the U.S., as established in cases including AC Treuhand v. Commission (EU) and Apple e-Books (U.S.), hub-and-spoke structures are analyzed as antitrust conspiracies where there is evidence suggesting communication, or at least mutual understanding, among the spokes, in contrast with purely parallel vertical agreements between the spokes and the hubs.[7]

Where the spokes – in a gig economy enterprise, such as a ride-sharing platform, the individual service suppliers – merely sign on to a price structure established by an algorithm deployed by the hub, the question of communication among spokes may depend on the degree to which each understood, and relied on, competitors’ being subject to the same terms. Mutual understanding and reliance are more likely to arise with a simpler if-then pricing algorithm, which affords substantial insight into pricing decisions and a consequent ability to rely on mutuality among suppliers. In contrast, the black box of the ML algorithm undermines insight into pricing decisions. In the absence of express evidence of coordination, this lack of insight should undermine a conclusion of hub-and-spoke conspiracy. This result seems contrary to the emerging academic and policy consensus that ML and black-box pricing decisions are the primary concerns in algorithmic pricing.

 

III. DECENTRALIZED ALGORITHMIC PRICING

Outside of the hub-and-spoke structure, potential algorithmic coordination is not centralized by a platform. This removes one non-conspiratorial link between competitors that, under the constraints discussed above, may elevate conduct otherwise considered innocently parallel or tacitly collusive to the level of antitrust conspiracy. In a forthcoming chapter we analyze the impact of the varieties of pricing algorithms on the antitrust treatment of observed coordination, again through a comparative lens with particular attention to North American and European competition policy.[8]

Moving to a finer grain than the simple if-then/ML distinction, a taxonomy of pricing algorithms based on existing machine learning techniques distinguishes (1) supervised learning, in which inputs and outputs are entered by humans until the software develops an independent capacity to predict outputs from a given input; (2) unsupervised learning, in which inputs are entered and the software is left to seek optimal outcomes; and (3) reinforcement learning, in which the software is programmed to seek a rewarded result through trial and error. The most frequently discussed reinforcement learning agent is the Q-learning algorithm, whereby the software learns to maximize rewards by predicting the value of each action and updating those predictions with the results produced. Other forms of learning software include Deep Neural Networks (“DNN”), an entirely different design structure based on interconnected layers of artificial neurons loosely modeled on the functioning of the human brain. DNN learning can also be supervised, unsupervised, or reinforcement learning, and the learning process can involve modifying the connections between the layers to produce different results. The complexity, and variability, of the input-outcome processes makes them difficult or impossible to understand, giving rise to concerns about DNN algorithms as “black boxes.” Another is the Random Forest, which combines the predictions of many decision trees and offers computational efficiencies, requiring less data at the input stage. Relative to DNN algorithms, Random Forests are reported to be more transparent and less resource-intensive.[9]
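
To make the Q-learning approach concrete, the following is a minimal sketch of a tabular Q-learning pricing agent; the price grid, state representation, and hyperparameters are our own illustrative assumptions rather than those of any cited study:

```python
# Minimal tabular Q-learning pricing agent, in the spirit of the experimental
# literature cited in this section. The price grid, state definition, and
# hyperparameters are illustrative; published studies use richer designs.

import random
from collections import defaultdict

PRICES = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]     # discrete menu of possible prices


class QLearningPricer:
    def __init__(self, alpha: float = 0.1, gamma: float = 0.95, epsilon: float = 0.1):
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # weight placed on future rewards
        self.epsilon = epsilon       # exploration probability
        self.q = defaultdict(float)  # Q[(state, price)] -> estimated value

    def choose_price(self, state) -> float:
        """Epsilon-greedy: usually pick the price with the highest estimate,
        occasionally experiment with a random one."""
        if random.random() < self.epsilon:
            return random.choice(PRICES)
        return max(PRICES, key=lambda p: self.q[(state, p)])

    def update(self, state, price, reward, next_state) -> None:
        """Standard Q-learning update: move the estimate for (state, price)
        toward the observed profit plus the discounted value of the best
        next action."""
        best_next = max(self.q[(next_state, p)] for p in PRICES)
        target = reward + self.gamma * best_next
        self.q[(state, price)] += self.alpha * (target - self.q[(state, price)])
```

In the experiments discussed below, two such agents are typically placed in a repeated pricing game, with the state defined as the pair of prices charged in the prior period and the reward as each period’s profit.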

Experiments with sophisticated reinforcement learning algorithms have demonstrated that collusive outcomes are possible in the absence of human intervention. Features supporting coordination include the quantity of data and speed of processing; memory of prior interactions between algorithms; the capacity of algorithms to communicate; the pace of learning; and a less complex algorithmic decision process. This last feature is important: consistent with our conclusion regarding hub-and-spoke conspiracies discussed above, the more opaque the decision process, the less likely the experimental collusive result, apparently because insight into the decision process is key to coordinating outcomes.[10]

A recent study of pricing at gasoline stations that, the evidence suggests, adopted pricing software is to date the only real-world empirical survey of market impacts from the adoption of algorithmic pricing. Stephanie Assad et al. in 2021 report post-adoption price increases of 0.6 cents per liter and profit increases of 0.8 cents per liter (approximately 9 percent). Notably, stations in monopoly markets did not show any increase, which suggests the post-adoption price level approaches the monopoly price level. While the overall effect is for average prices to increase toward the monopoly level, Assad et al. report results that may produce consumer benefits, including a decrease in the highest prices charged and a greater tendency in duopoly markets to match competitor price decreases. (This is an ambiguous finding, as matching a decrease can be a disciplining strategy in oligopoly markets.) Assad et al. make another important finding, noting an approximate one-year delay between adoption and reaching the monopoly price, which suggests the algorithms facilitate tacit, rather than express, collusion.[11]

The legal treatment of algorithm-based pricing and its possible effects is as yet undetermined. Both EU and U.S. law readily prohibit, as illegal per se or as restrictions by object, agreements as to price or related competitive factors, and existing prosecutions based on algorithmic pricing have involved express collusion between human actors using pricing algorithms to execute the collusive scheme.[12] Little question should exist that the mere deployment of an algorithm, leading to coordinated results through tacit collusion, would benefit from the de facto immunity from prosecution under rules governing anticompetitive agreements, even though algorithms may be more successful than tacitly colluding humans in producing coordinated prices.[13]

The resolution of two middle-ground questions will be highly fact-dependent: first, what is the effect of agreement among human actors to implement an algorithm, knowing of the software’s superior capacity to produce tacitly collusive outcomes? And second, what is the effect of actual agreement – if philosophically possible – between two algorithms, deployed by human actors without intention to reach agreement? The first question should be resolved by a rule drawn from the law governing information sharing, whereby an agreement to share information that is likely to lead to coordination might be readily challenged under a rule of reason or quick-look standard. In the EU, the rarely litigated question of collective dominance, with algorithms meeting the Airtours criteria,[14] might be a guide for enforcement against tacit collusion by algorithm.

The second question has no good analogy in competition law and is just as likely to be resolved by regulation as by resort to principles of competition law. However, some governmental or inter-governmental reports on algorithm use have suggested updating the law of agreement to treat rapid price adjustments leading to monopoly outcomes as constituting a de jure agreement.[15] If such a broadening of the agreement element were to occur to cover instances of tacit collusion brought about by algorithms, jurisdictions would need to be certain to allow consideration of efficiencies rather than resorting to per se condemnation – something the EU approach under Article 101(3) is better suited to achieve than is the U.S. per se standard.

 

IV. HOW TO QUANTIFY EFFICIENCIES FROM ALGORITHMIC PRICING?

One question that neither the academic literature nor policy reports have tackled in depth is how to assess any efficiencies from algorithmic pricing that should factor into a rule of reason analysis under U.S. antitrust law, or that could be considered under an effects analysis under Article 101(1) TFEU or the efficiency defense under Article 101(3) TFEU. The importance of efficiencies is all the greater if jurisdictions follow suggestions to broaden the concept of agreement to include agreement without human involvement, such as the idea of rapid price changes leading to monopoly outcomes serving as a de jure agreement.

On its face, EU law provides greater clarity as to the operation of the efficiency defense. Article 101(3) and the Commission’s interpreting guidelines[16] outline four elements to a credible efficiency defense: (1) “improving the production or distribution of goods or contribut[ing] to promoting technical or economic progress”; (2) “Consumers . . . receiv[ing] a fair share of the resulting benefits”; (3) the “restrictions[’ . . .] indispensab[ility] to the attainment of these objectives”; and (4) not “eliminating competition in respect of a substantial part of the products concerned.” The relative size of the harms and benefits is also relevant: “efficiencies generated by the restrictive agreement within a relevant market must be sufficient to outweigh the anti-competitive effects produced by the agreement within that same relevant market.”[17] The burden is on the defendants to quantify or predict those efficiencies, and to justify the quantification or prediction.[18]

This may be particularly difficult in the case of algorithmic pricing, where competitors might not be fully aware of tacit coordination, let alone of the concrete efficiency gains from it. In practical terms, however, quantifying the relative size of an effect or an efficiency is less science than art, and in that way is analogizable to the proof of efficiencies under the rule of reason in U.S. law. In the U.S., Supreme Court precedent establishes broad standards that require claimed efficiencies, to be cognizable, to be economic in nature and the restraint not to be substantially more restrictive than necessary to achieve them.[19] The Competitor Collaboration Guidelines, while dated, give slightly more content to those vague rules, imposing requirements of verifiability, potential procompetitiveness, reasonable necessity, and the lack of a less restrictive alternative. In the presence of such an efficiency, the rule of reason question turns on the “overall competitive effect,” considering whether the efficiencies are likely to outweigh the harm from the collaboration. While consumer pass-through is not an express requirement, the primary example of a gain offsetting harm is “preventing price increases.”[20]

The Article 101(3) TFEU efficiency defense, as applied in the Luxembourg Competition Council’s 2018 Webtaxi decision, permits evidence of efficiencies to create an individual exemption from what would otherwise be a conclusion of restriction by object under Article 101(1) TFEU.[21] The algorithm deployed by the B2B platform defendant allocated rides among competing taxi services, but in the process created benefits including a reduced incidence of empty taxis, a central contact point for consumers, efficient management of ebbs and flows in demand, and, on net, lower prices than comparable services. The speed and efficiency of the service was a function of the algorithm itself, suggesting no less restrictive alternative was available.

Such an analysis would not typically be available in the U.S. under an application of Sherman Act Section 1, which – if the requisite agreement were identified – would be unlikely to accommodate efficiency arguments due to the application of the per se rule. However, the clear benefits to competition from platform coordination of service providers in markets such as ride sharing suggest a better approach is to treat any identified agreement under a quick-look rule of reason, placing the burden to show efficiency justifications on the platform.[22] Of the Webtaxi efficiencies, the speed and efficiency of the service and the net lower prices should be cognizable under U.S. law; others, including the reduction in empty taxis, efficient management of ebbs and flows in demand, reduction in pollution, and a central contact point, may be less likely to constitute economic benefits offsetting the harms from an agreement.

The role of algorithmic pricing in the operation of gig economy platforms highlights the efficiencies produced by centralizing and computerizing decisions even on competitively sensitive matters, such as price, output, and scheduling. The quantification problem remains unresolved, however, and it appears certain that substantial empirical work is required. Regarding the decentralized deployment of pricing algorithms and the risks from tacit collusion, the existing efficiency framework may require complete rethinking. After all, the efficiencies must be proved to emerge from the collaboration itself, and this may not translate to a scenario where coordination is not necessarily intended by the human actors who deploy pricing algorithms.

 

V. CONCLUSION

The question of how to evaluate the use of algorithmic pricing by competitors under antitrust rules in the U.S. and EU is unlikely to go away soon. Rapid developments in technology and digital business strategies indicate that algorithmic pricing is likely only to grow in importance as a market phenomenon. In order to adjust antitrust analysis to this new phenomenon, further study is needed at both the theoretical and empirical levels. In particular:

  • The question of whether the concept of agreement should be and can practicably be broadened is important;
  • We need more observations and evidence regarding the types of algorithms and machine learning techniques for pricing and their effect on market outcomes; and
  • We need to understand how to quantify and assess efficiencies from algorithmic pricing in order to arrive at sound antitrust policies.

[1] Huffman is Professor of Law, Indiana Univ.-McKinney School of Law and Senior Research Fellow, Loyola Univ.-Chicago Institute for Consumer Antitrust Studies; Schmidt-Kessen is Assistant Professor in Competition and Intellectual Property Law, Legal Studies Department, Central European University.

[2] Mehra, Salil (2016). “Antitrust and the Roboseller: Competition in the Time of Algorithms,” Minnesota Law Review 100, 1323-1375.

[3] Ezrachi, Ariel & Stucke, Maurice (2016). Virtual Competition. Cambridge, Mass: Harvard University Press.

[4] Huffman & Schmidt-Kessen, Gig Platforms as Hub-and-Spoke Arrangements and Algorithmic Pricing: A Comparative EU-US Analysis, Univ. Toulouse-1 Capitole (forthcoming), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3969194.

[5] United States v. David Topkins, Plea Agreement, Crim. No. 15-201 (N.D. Cal. Apr. 30, 2015); Online sales of posters and frames, Case No. 50223 (CMA Aug. 12, 2016).

[6] The relationship matters greatly for purposes of the basic question of agreement, but is tangential to our question here. Anderson & Huffman (2017). “The Sharing Economy Meets the Sherman Act: Is Uber a Firm, a Cartel, or Something In-Between?” Colum. Bus. L. Rev., Vol. 2017, p. 859; Nowag (2018). “When Sharing Platforms Fix Sellers’ Prices.” J. Antitrust Enf., Vol. 6, pp. 296-354.

[7] Case C-194/14 P AC Treuhand v. Commission; Case C-74/14 ETURAS; Toys ’R’ Us Inc. v. FTC, 221 F.3d 928 (7th Cir. 2000); United States v. Apple Inc., 791 F.3d 290 (2d Cir. 2015).

[8] Maria Jose Schmidt-Kessen & Max Huffman, “Antitrust Law and Coordination through AI-Based Pricing Technologies,” Inteligência Artificial da Unidade de Investigação da Faculdade de Direito, Universidade Católica Portuguesa (Springer, forthcoming 2022).

[9] Research on algorithms from sources including Calvano, Emilio, Calzolari, Giacomo, Denicolò, Vincenzo & Pastorello, Sergio (2019). “Algorithmic Pricing: What Implications for Competition Policy?” Review of Industrial Organization 55: 155–171; Klein (2021). “Autonomous Algorithmic Collusion: Q-Learning Under Sequential Pricing,” RAND Journal of Economics (forthcoming); Montes, James (2020). “3 Reasons to Use Random Forest Over a Neural Network,” available at https://towardsdatascience.com/3-reasons-to-use-random-forest-over-a-neural-network-comparing-machine-learning-versus-deep-f9d65a154d89#:~:text=Both%20the%20Random%20Forest%20and,are%20exclusive%20to%20Deep%20Learning; Nicholson, Chris (2021). A Beginner’s Guide to Neural Networks and Deep Learning, https://wiki.pathmind.com/neural-network.

[10] Studies of collusive outcomes discussed at Hettich, Mathias (2021). “Algorithmic Collusion: Insights from Deep Learning” (February 16, 2021), available at https://ssrn.com/abstract=3785966; Schwalbe, Ulrich (2019). “Algorithms, Machine Learning, and Collusion,” Journal of Competition Law & Economics, 14(4), 568–607; Klein (2021). “Autonomous Algorithmic Collusion: Q-Learning Under Sequential Pricing,” RAND Journal of Economics (forthcoming); Calvano, Emilio, Calzolari, Giacomo, Denicolò, Vincenzo & Pastorello, Sergio (2020). “Artificial Intelligence, Algorithmic Pricing, and Collusion,” American Economic Review 110(10): 3267–3297.

[11] Assad, Stephanie, Clark, Robert, Ershov, Daniel & Xu, Lei (2021). “Algorithmic Pricing and Competition: Empirical Evidence from the German Gasoline Market,” available at https://www.chicagobooth.edu/-/media/Research/Kilts/docs/qme2021paper32AlgorithmicPricingandCompetitionEmpiricalEvidencefromtheGermanRetailGasolineMarket.

[12] United States v. David Topkins, Plea Agreement, Crim. No. 15-201 (N.D. Cal. Apr. 30, 2015); Online sales of posters and frames, Case No. 50223 (CMA Aug. 12, 2016).

[13] See, e.g., In re Text Messaging Antitrust Litig., 782 F.3d 867, 874 (7th Cir. 2015); Cases C-40 to 48, 50, 54 to 56, 111, 113 and 114-73 Suiker Unie; Case 172/80 Züchner v Bayerische Vereinsbank; Case T-442/08 CISAC v Commission [2013].

[14] See judgment from the EU General Court in Case T-342/99, Airtours v. Commission.

[15] E.g., OECD (2017). Algorithms and Collusion, https://www.oecd.org/competition/algorithms-and-collusion.htm.

[16] EU Commission Guidelines on the application of Article 81(3) of the Treaty (2004/C 101/08).

[17] Ibid., para. 43.

[18] Ibid., paras. 34, 43.

[19] Soc’y of Prof. Eng’rs v. United States, 435 U.S. 679 (1978); NCAA v. Alston, 141 S. Ct. 2141 (2021).

[20] U.S. Dep’t of Justice & Fed. Trade Comm’n, Competitor Collaboration Guidelines sections 2.1, 3.36, 3.37 (2000).

[21] Conseil de la Concurrence, Décision no. 2018-FO-01 du 7 juin 2018 – Webtaxi S.à.r.l.

[22] Anderson & Huffman (2017). “The Sharing Economy Meets the Sherman Act: Is Uber a Firm, a Cartel, or Something In-Between?” Colum. Bus. L. Rev., Vol. 2017, p. 859.