
Whether algorithms increase the risk of tacit collusion remains very uncertain. Yet, if they do, the consequences for the effectiveness of regulation and of competition policy could be important, and regulators need to think about how to ensure that this risk of algorithmic collusion is reduced while simultaneously preserving firms’ incentives to adopt such efficiency-enhancing tools. This article reviews the latest experimental evidence on algorithmic collusion and its limitations. It then analyses some of the solutions brought forward to adapt competition policy to the issue of algorithmic tacit collusion, with a particular focus on possible complementarities between regulatory tools and competition enforcement.

By John Moore, Etienne Pfister & Henri Piffaut1

 

I. INTRODUCTION

A series of influential academic studies have highlighted the risk that algorithms may facilitate tacit collusion.2 This has led to growing interest among competition authorities and practitioners. Indeed, over the last couple of years, the UK, French, German, and Portuguese competition authorities have all published reports addressing this issue.3 However, to the best of our knowledge, no case of purely algorithmic tacit collusion has yet been sanctioned by a European competition authority.

This notable absence of cases could be due to several, non-mutually exclusive, reasons. First, the risk of algorithmic tacit collusion may have been overestimated, at least in the short run. For instance, firms may be reluctant to use black-box pricing algorithms of the type used in the experiments showing that algorithms can lead to tacit collusion; real-life market conditions may also be too remote from the experimental settings where algorithm-supported tacit collusion was detected. Second, competition authorities could be ill-equipped to detect tacit collusion due to insufficient data monitoring. Furthermore, under the current legal framework (which requires either an agreement or a concerted practice to tackle collusion), it is not obvious that purely tacit collusion, which would not involve any data exchange or any communication between firms, could be qualified as an anticompetitive practice.

Still, whether algorithms do increase the risk of tacit collusion remains very uncertain. Yet, if this is the case, the consequences for the effectiveness of regulation and of competition policy could be very important, and regulators need to think about how to ensure that this risk is reduced while simultaneously preserving firms’ incentives to adopt such efficiency-enhancing tools. After reviewing the latest experimental evidence on algorithmic collusion and its limitations (Section 2), this article analyses some of the solutions brought forward to adapt competition policy to the issue of algorithmic tacit collusion (Section 3). Section 4 discusses possible complementarities between regulatory tools and competition enforcement, as well as the obstacles that continue to lie ahead. Section 5 concludes.

 

II. THE EXPERIMENTAL RESULTS ON ALGORITHMS AND TACIT COLLUSION

A. Presentation of the Results

Many papers that aim to assess the risk of tacit collusion due to algorithmic pricing consider an experimental framework where algorithms “play” against each other in a setting that mimics a competitive environment. In their simplest form, the games consist of two identical algorithms, one per firm, each setting a price for a substitutable good or service given the firms’ production function and the (estimated) level of demand. Very often, a certain degree of common knowledge is assumed, regarding demand conditions or competitors’ prices for instance. The objective of the algorithms is to maximize the long-term profits of their respective firms. After setting its price, each algorithm observes the outcomes: the price set by its competitor and the reward earned from the simulated sales of the product given the prices set by the two firms. This interaction is then repeated a large number of times until the system stabilizes. In the end, the results of the experiment are generally compared with the perfectly competitive case to assess the degree of tacit collusion.
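To make this setting concrete, the sketch below implements a minimal version of such an experiment: two Q-learning agents repeatedly set prices on a discrete grid in a symmetric duopoly with logit demand, each observing the other’s last price. All parameter values and the demand specification are illustrative assumptions chosen for this sketch, not those of any cited study.

```python
# Minimal sketch of a two-agent Q-learning pricing experiment.
# Assumptions: logit demand, symmetric firms, discrete price grid;
# parameters are illustrative, not taken from the literature.
import numpy as np

rng = np.random.default_rng(0)

prices = np.linspace(1.0, 2.0, 10)            # discrete price grid
n = len(prices)
alpha, gamma, eps_decay = 0.15, 0.95, 0.9999  # learning rate, discount, exploration decay

def profits(p1, p2, mu=0.25, a=2.0, c=1.0):
    """Logit demand shares; each firm earns (price - cost) * share."""
    u = np.exp((a - np.array([p1, p2])) / mu)
    shares = u / (1.0 + u.sum())               # 1.0 = outside option
    return (np.array([p1, p2]) - c) * shares

# One Q-table per firm: state = both firms' last price indices, action = own next price
Q = [np.zeros((n, n, n)) for _ in range(2)]
state = (rng.integers(n), rng.integers(n))
eps = 1.0

for t in range(500_000):
    # epsilon-greedy action choice for each firm
    acts = [rng.integers(n) if rng.random() < eps else int(np.argmax(Q[i][state]))
            for i in range(2)]
    rew = profits(prices[acts[0]], prices[acts[1]])
    nxt = (acts[0], acts[1])
    # standard Q-learning update for each firm, given the rival's observed price
    for i in range(2):
        best_next = Q[i][nxt].max()
        Q[i][state + (acts[i],)] += alpha * (rew[i] + gamma * best_next
                                             - Q[i][state + (acts[i],)])
    state, eps = nxt, eps * eps_decay

print("long-run prices:", prices[state[0]], prices[state[1]])
```

In experiments of this kind, the long-run prices printed at the end are then compared with the static competitive benchmark to gauge whether the agents have settled on supra-competitive levels.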

Experimental results have shown that, in such settings, tacit collusion, i.e. supra-competitive prices or profits, is likely to be attained, at least after a given number of iterations.4 However, these experimental settings have often been criticized for being too simplistic compared with real-life markets (see below). To address this issue, some researchers have tested whether their experimental results still hold when complexity is added to the framework. For instance, in a series of robustness checks, Calvano et al. (2019b)5 consider stochastic (rather than stable and perfectly known) demand for the good, the possibility of entry of a third firm during the experiment, or the case of asymmetric companies. Their results show that the added complexity reduces the level of supra-competitive profits or lengthens the time needed before a collusive outcome is reached. For instance, Klein (2019) shows that the speed of convergence to collusion decreases as the number of discrete prices the algorithms may use increases.6 Yet, in spite of this complexity, algorithms still manage to reach some degree of coordination, as Calvano et al. (2019b)7 show. Similarly, Hansen et al. (2020)8 show that collusion is still likely to be attained even when algorithms do not observe each other’s prices. All in all, it may be inferred from these various experiments that factors such as symmetry between firms, market stability, simple decision rules, and market transparency, while not absolute prerequisites to algorithmic collusion (at least in the experimental settings described above), may facilitate its attainment or increase the level of profits (and hence the consumer loss) it generates.
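To fix ideas, the degree of collusion in these experiments is typically summarized by a normalized profit gain comparing realized profits with competitive and fully collusive benchmarks; the formulation below is a common convention in this literature (the benchmark values themselves depend on the demand model assumed):

$$\Delta = \frac{\bar{\pi} - \pi^{N}}{\pi^{M} - \pi^{N}}$$

where $\bar{\pi}$ is the average per-firm profit upon convergence, $\pi^{N}$ the per-firm profit in the static Nash equilibrium, and $\pi^{M}$ the per-firm profit under full collusion, so that $\Delta = 0$ corresponds to the competitive benchmark and $\Delta = 1$ to perfect collusion.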

B. Limitations of the Experimental Framework

In spite of these findings, a significant number of researchers and practitioners doubt that algorithms could end up colluding in real-life market settings (see for instance Kühn & Tadelis (2017)9 or Schwalbe (2018)10). They set forth five main criticisms:

  • First, the comparison between the prices set by the algorithms during the experiments and the theoretical price level in a perfectly competitive environment is uninformative. Absent pricing algorithms, managers would set firms’ prices, and such prices would likely be higher than the perfectly competitive benchmark, and possibly even higher than the prices set by the algorithms if managers also learn to tacitly collude. Unfortunately, the high number of iterations used in experimental studies on algorithms prevents researchers from replicating their studies with human players in order to assess the level at which the benchmark should be set. It is thus impossible to know whether, and to what extent, pricing algorithms lead to worse collusive equilibria than human managers in the long run.
  • Second, it is unclear whether firms would ever accept using the types of pricing algorithms considered in experimental studies. The algorithms used in the experimental literature are often black-box algorithms11 whose actions cannot easily be understood or explained, as opposed to descriptive algorithms. Firms may be reluctant to delegate their pricing decisions to such algorithms. Furthermore, before potentially reaching any collusive outcome, Q-learning algorithms12 such as the ones used in the experimental literature are likely to generate losses for the firms, especially during the learning/training phase, where the algorithms explore the rewards associated with different sets of prices. These losses may be neither negligible nor short-lived, as some time is needed before the firm is actually able to assess the level of profits associated with a given price. Firms may thus be reluctant to accept such sacrifices, despite the prospect of higher revenues from tacit collusion in the longer run.
  • Third, even if firms adopted pricing algorithms, they may want to exploit the data-processing capabilities of algorithms to charge different prices to different (classes of) consumers, using information collected on these (classes of) consumers. However, as underlined by the CMA in its report on pricing algorithms,13 tacit collusion and personalized pricing are very unlikely to occur simultaneously: “without explicit communication and sharing of information, if there are many differentiated products and personalised prices, then it appears far more difficult to reach a common understanding of the terms of coordination.”
  • Fourth, experiments on algorithmic collusion rely on strong assumptions about the economic environment. These settings usually consider only two players using the same algorithm, each selling a single product, no risk of entry, stable demand, discrete and uniform prices, etc. Calvano et al. (2019b)14 and Hansen et al. (2020),15 among others, show that relaxing some of these assumptions individually may not decrease the risk of collusion to a great extent (see above). Yet, as argued by the Autorité de la concurrence & the Bundeskartellamt, “a real-life market environment is likely to encompass several sources of complexity simultaneously. Their joint effect on the likelihood of collusion remains an open question for future economic research.”16 Furthermore, in real life, algorithms may have to constantly re-learn how to price given the changing complexity of the environment.
  • Finally, it would seem that the positive effects of pricing software are often ignored or, at least, unaccounted for in these studies. Some analysis would be needed of whether these efficiencies are correlated with the collusive outcomes. If they are, the positive effects enabled by pricing algorithms, in particular potential efficiency gains, could offset at least part of the costs arising when algorithms reach collusive outcomes. Consequently, additional work should be done to determine whether the net effect of pricing algorithms is negative or positive compared to a but-for world.

As a result, whether the increasing use of pricing algorithms by firms is likely to lead to more tacit collusion remains much debated.

 

III. POSSIBLE DIFFICULTIES FOR COMPETITION POLICY AND FOR THE REGULATORY FRAMEWORK

The relevance of the experimental framework used to demonstrate the risk of algorithm-based tacit collusion has important implications for competition authorities. If pricing algorithms pose a risk to competition, competition policy has to address that risk. Conversely, if this risk is non-existent or overestimated, such interventions might only or mainly generate costs for companies and customers alike, and thus negatively affect economic efficiency as a whole. In addition, monitoring algorithms is also likely to increase the burden on regulatory agencies, thus requiring more resources or diverting existing resources from other tasks.

There is no consensus on whether competition policy should address algorithmic pricing. Some consider that the experimental framework is far too remote from the real world and that a higher risk of tacit collusion through algorithms than through human decision-making has not been sufficiently demonstrated. Advocates of this position often argue that no action should be taken. For instance, Schwalbe (2018)17 considers that competition authorities should not waste important and finite resources on the topic: “the limited resources of competition authorities should rather be devoted to more pressing problems as, for example, the abuse of dominant positions by large online-platforms.”

While recognizing that the plausibility of algorithm-based tacit collusion is uncertain, others insist that it cannot be ruled out, that only simple algorithms are used in the experiments, and that more complex algorithms could well entail more collusion than these papers illustrate.18 They argue that this issue, although of uncertain relevance today, will become more pressing in the future. Several regulatory tools to help address the risks associated with pricing algorithms have been discussed, relying either on a new regulatory regime or on competition law enforcement.

A. Some Regulatory Solutions

Banning pricing algorithms altogether seems excessive.19 Yet banning certain types of algorithms has been envisaged. For instance, descriptive algorithms could be allowed because they are easy to understand and thus to assess under competition law enforcement if needed. By contrast, the effects of so-called “black-box” algorithms are harder to anticipate, monitor, or interpret; the collusion that could stem from the use of such algorithms is thus more difficult to detect and to demonstrate. As a result, some could call for such algorithms to be banned. However, black-box algorithms are used because they present specific advantages over descriptive algorithms that are unrelated to the possibility of tacit collusion. For instance, black-box algorithms usually evolve automatically, in particular regarding their pricing strategies, when faced with changing external conditions. Thus, banning them could generate costs that may exceed the benefits of reducing the risk of algorithm-based tacit collusion (Calvano et al. (2019a)).20 Furthermore, less drastic solutions could also help reduce the risk of tacit collusion while preserving the efficiency gains associated with black-box algorithms.

Hence, some propose an approach targeted at the design stage of an algorithm, either through conformity by design or through sandbox testing. These proposals are very much consistent with proposals in other public policy fields such as ethics.21 Under the “conformity by design” approach, competition rules should be integrated into the algorithms. While this approach is attractive on paper, the competition rules that the design of algorithms should integrate are often unclear. Tacit collusion is not, by itself, anticompetitive, and firms have the right to adjust their prices to their competitors’ behaviour. A conformity-by-design approach would need to identify some key parameters embodied in the algorithm which may facilitate collusion, such as the speed of price changes, signalling attempts, or information gathering.
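To make the idea concrete, here is a minimal, purely hypothetical sketch of what such embedded constraints could look like: a wrapper that caps the size and frequency of the price changes proposed by an arbitrary pricing engine. The PricingEngine interface, the ComplianceLimits values, and the choice of these particular parameters as the right ones to constrain are all assumptions made for illustration, not an established compliance standard.

```python
# Hypothetical "conformity by design" wrapper around a pricing engine.
# The engine is any object exposing propose_price(market_state);
# the limit values below are arbitrary illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ComplianceLimits:
    max_step: float = 0.05   # largest allowed relative price change per update
    min_interval: int = 24   # minimum number of periods between price changes

class CompliantPricer:
    def __init__(self, engine, limits: ComplianceLimits):
        self.engine = engine
        self.limits = limits
        self.last_price = None
        self.periods_since_change = 0

    def price(self, market_state):
        proposal = self.engine.propose_price(market_state)
        self.periods_since_change += 1
        if self.last_price is None:
            self.last_price = proposal
            return proposal
        # cap the speed of price changes: clamp the proposal to a band
        # around the last posted price
        cap = self.limits.max_step * self.last_price
        bounded = max(self.last_price - cap, min(self.last_price + cap, proposal))
        # rate-limit how often prices may move at all
        if self.periods_since_change < self.limits.min_interval:
            return self.last_price
        self.periods_since_change = 0
        self.last_price = bounded
        return bounded
```

The design choice here is that compliance constraints sit outside the (possibly black-box) engine, so they remain inspectable even when the engine itself is not.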

The “sandbox testing” approach requires that tests be conducted before the algorithm is used, in order to assess the risk of collusion (Ezrachi & Stucke (2016)).22 These tests should be conducted in an environment that mimics the competitive environment. Competition authorities should then have the power to ban certain algorithms that are prone to collusion,23 or to recommend changes to the code of the algorithms to limit the risk of collusion. This approach is fraught with difficulties. First, as already explained, it can be difficult to design a framework that mimics the real competitive environment; this framework may prove hard to design and hard to define ex ante, in guidelines for example. Second, tests would have to comply with some principles (to ensure integrity, verifiability, replicability, etc.) so that outcomes can be trusted and compared. Third, the number of these tests could be quite high if every algorithm is to be tested. Fourth, there is the question of who could undertake such tests: third parties, the companies themselves, etc. Finally, the results of the tests may also depend on the competitors’ algorithms, which may not be known by the company testing its own algorithm and which may change or be adapted during operation.
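As an illustration of what such a test could compute, the sketch below runs a candidate algorithm through many seeded simulations and compares the average long-run price with competitive and monopoly benchmarks. The simulate() function, the benchmark values, and the pass/fail threshold are hypothetical placeholders for whatever a real testing regime would have to specify.

```python
# Illustrative sandbox-testing harness. simulate(seed) is assumed to run
# the candidate algorithm (e.g. against a copy of itself) in a mock market
# and return the average long-run price; benchmarks and threshold are
# arbitrary assumptions for this sketch.
import statistics

COMPETITIVE_PRICE = 1.47   # assumed static Nash benchmark for the test market
MONOPOLY_PRICE = 1.92      # assumed joint-monopoly benchmark

def collusion_score(avg_price: float) -> float:
    """0 = competitive outcome, 1 = fully collusive outcome."""
    return (avg_price - COMPETITIVE_PRICE) / (MONOPOLY_PRICE - COMPETITIVE_PRICE)

def sandbox_test(simulate, n_runs: int = 100, threshold: float = 0.5) -> bool:
    """Run the candidate over many seeds and flag collusive tendencies."""
    scores = [collusion_score(simulate(seed)) for seed in range(n_runs)]
    mean, spread = statistics.mean(scores), statistics.pstdev(scores)
    print(f"mean collusion score {mean:.2f} (sd {spread:.2f}) over {n_runs} runs")
    return mean < threshold   # True = passes this (hypothetical) test
```

Note that the difficulties listed above reappear directly in the code: the benchmarks, the opponent algorithms inside simulate(), and the threshold all have to be fixed ex ante, and each choice shapes the verdict.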

B. Some Competition Enforcement Issues

The detection of algorithmic tacit collusion by competition authorities could be an arduous task. To facilitate detection, some authors have suggested creating a whistleblower bounty program, under which consumers could report potential cases of algorithmic collusion (Lamontanaro (2020)).24 But even if detection could be simplified, proving the wrongdoing could still be strenuous if the competition authority lacks the necessary data. To alleviate this issue, a regulatory regime could require companies to retain certain data on their past actions, their algorithms, and a set of parameters (Marty et al. (2019)).25 This would require that the algorithms be designed in such a way that their actions can be monitored. Data-mining methods could also be used to detect abnormal algorithmic decisions, as well as to identify companies whose price evolutions warrant investigation.
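By way of illustration, the sketch below shows one elementary form such a data-mining screen could take: flagging pairs of firms whose price changes are highly correlated and which trend upward together. The thresholds are arbitrary assumptions, and parallel pricing alone proves nothing; such a screen would only identify candidates for closer investigation.

```python
# Illustrative screening of observed prices for near-lockstep upward moves.
# Thresholds are arbitrary assumptions; a flag is an investigation lead,
# not evidence of collusion.
import numpy as np

def parallel_pricing_screen(price_matrix: np.ndarray,
                            corr_threshold: float = 0.95,
                            trend_threshold: float = 0.0):
    """price_matrix: shape (n_periods, n_firms) of observed prices.
    Returns pairs of firms with highly correlated price changes and a
    common upward trend over the sample."""
    changes = np.diff(price_matrix, axis=0)        # period-to-period moves
    corr = np.corrcoef(changes.T)                  # firm-by-firm correlation
    trends = price_matrix[-1] - price_matrix[0]    # net price drift per firm
    flagged = []
    n_firms = price_matrix.shape[1]
    for i in range(n_firms):
        for j in range(i + 1, n_firms):
            if corr[i, j] > corr_threshold and min(trends[i], trends[j]) > trend_threshold:
                flagged.append((i, j, round(float(corr[i, j]), 3)))
    return flagged
```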

Some legal aspects are also important. Indeed, under current EU law, Article 101 can apply only when either an agreement or a concerted practice has been established. A concerted practice means a “form of coordination between undertakings, which, without having been taken to the stage where an agreement […] has been concluded, knowingly substitutes for the risks of competition, practical cooperation between them,” in particular “any direct or indirect contact between such operators by which an undertaking may influence the conduct on the market of its actual or potential competitors or disclose to them its decisions or intentions concerning its own conduct on the market.”26 However, Article 101 “does not deprive economic operators of the right to adapt themselves intelligently to the existing and anticipated conduct of their competitors.” In other words, when companies act unilaterally, without an element of communication, Article 101 cannot apply. Hence, a competition authority can only condemn parallel pricing if it has shown that the pricing equilibrium is due to collusion rather than to intelligent adaptation by the firms.27 Drawing a line between communication and its absence is a difficult exercise, and the attendant risk of false positive and false negative outcomes has led to an absence of cases targeting tacit collusion.28 In practice, tacit collusion may only be sanctioned indirectly, if it is proven that the algorithms have used devices implying some form of communication (such as data exchanges between algorithms).

Hence, if algorithms do increase the risk of tacit collusion, the fact that this kind of collusion is not currently forbidden by competition rules may constitute one of the most important obstacles faced by competition authorities when dealing with algorithmic collusion. Relying exclusively on competition law enforcement would thus require a change of competition rules or, at the very least, a change of stance by competition authorities on tacit collusion. It would appear that a dedicated regulatory regime would be required to enable any competition law enforcement. However, taking a different approach to tacit collusion would raise a number of arduous questions for competition authorities. For instance, should algorithmic tacit collusion and other types of tacit collusion be treated in the same way? In addition, an appropriate standard of proof for algorithm-based tacit collusion may prove hard to determine.29

 

IV. TOWARDS A UNIFYING REGULATORY FRAMEWORK? PRINCIPLE AND OBSTACLES

A. Principle

As appears from the discussion above, competition law enforcement and dedicated regulation of algorithmic pricing are not necessarily alternatives but rather complements. Both may be combined to increase the effectiveness of competition policy vis-à-vis algorithmic pricing. To overcome the limitations of either type of approach, one could envision a framework where firms would be required (or incentivized) first to test their algorithms prior to deployment in real market conditions (“risk assessment”), then to monitor the consequences of deployment (“harm identification”). A monitoring body could also be put in charge of this two-stage assessment. Hence, when acquiring or developing algorithmic pricing software, a company would have to assess the risks that the use of this algorithm entails in terms of compliance with existing regulations (including the risk that its use may trigger parallel price increases). Alternatively, a monitoring body could be in charge of testing these algorithms or of designing a system that ensures their proper testing. During deployment and use of the system, the company would be required (or incentivized) to monitor the behavior of the system against some public policy objectives (such as the maintenance of competition on the markets).

Proving that parallel upward price movements reflect tacit collusion rather than mere adaptation to competitors’ prices may turn out to be more feasible in such a setting. Indeed, if a dedicated regulation imposed testing before a given algorithm is deployed, and if that testing found that the pricing algorithm is highly likely to converge to a tacitly collusive equilibrium, the burden of proof for demonstrating tacit collusion once the algorithm is deployed could be lowered, as the risk of error (i.e., a false positive) has, at least theoretically, been reduced. Also, once the algorithm is running, the presence of monitoring mechanisms should help the company identify situations of tacit collusion. In such circumstances, the risk that a competition authority adopts a false positive decision is decreased compared to the situation without algorithms. Conversely, if the testing and monitoring do not show a risk of collusion, it becomes less likely that an authority would adopt a false negative decision (finding no collusion when there actually is one). Hence, one could argue that this would change the error cost, since the potentially harmful effects of algorithms could be flagged in advance.

Hence, in such a setting, dedicated regulation would support competition law enforcement by helping to separate cases of tacit collusion from mere price adaptation. The benefits of both regimes would also be maintained: a higher flexibility is allowed, yet the testing conducted beforehand and the monitoring conducted after the algorithm is deployed help ensure that cases of algorithmic tacit collusion can indeed be prosecuted and/or avoided. Yet this basic theoretical framework still leaves several questions unanswered.

B. Limitations

First, the testing conducted during the “risk assessment” phase should seek to replicate real-life circumstances. This requires making assumptions such that hypothetical market conditions and outcomes approximate reality as closely as possible. The key parameters to be reflected in testing would include market structure, transparency, stability, and the simplicity or complexity of decision rules. An over-simplified sandbox test is likely to produce collusive outcomes frequently, as some of the papers discussed above have shown. Yet, as also discussed above, this way of testing algorithms does not reflect the economic reality of the industry where the algorithm will be used: the environment and its dynamics, as well as the market interactions, are often more complex than those captured in the model. Also, these tests must decide whether all market participants use the same tested pricing algorithm or a mix of pricing methods. In addition, there is uncertainty about how the algorithm may evolve once it is fed with real-life data.

Hence, more work should be undertaken to better understand whether a finding of likely tacit collusion under testing conditions implies more likely collusion in real markets (i.e. to assess the extent of false positives). Such work would have to determine what constitutes proper testing conditions and how close testing outcomes are to possible outcomes in real markets. Ultimately, there should be an objective line that helps sort “bad” algorithms from “good” ones. In the same vein, more work should be undertaken to assess whether the absence of a significant likelihood of tacit collusion in a testing environment would make the occurrence of tacit collusion a non-significant risk in real markets (i.e. to assess the extent of false negatives).

A second difficulty lies in the risk that the testing is designed so as to yield an outcome of “no risk of collusion.” Indeed, algorithm developers could internalize the testing so as to create an appearance of limited likelihood of tacit collusion. There are various ways to address that risk beyond the institutional and regulatory setting already discussed: liabilities could be created, along with auditing and explanation obligations. It remains to be seen whether these are technically realistic.

Regarding the deployment phase, there are again two types of obstacles to the monitoring principle. The first obstacle is technical: monitoring would require the algorithm to be designed and operated in such a way that its decision-making can be tracked. The Commission has stressed the importance of auditing in its recent white paper.30 It is not obvious that this is feasible today.
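As an illustration of what such tracking could involve, the sketch below logs each pricing decision together with its inputs and the algorithm version in a tamper-evident record that an auditor could later replay. The record fields are assumptions about what a regulator might demand, not an established standard.

```python
# Hypothetical decision-level audit trail for a pricing algorithm.
# Each decision is appended as a JSON line with a hash of its own
# serialized content, so later tampering is detectable.
import json, hashlib, time

def log_pricing_decision(logfile, algo_version: str,
                         inputs: dict, output_price: float):
    record = {
        "timestamp": time.time(),
        "algo_version": algo_version,
        "inputs": inputs,              # e.g. demand estimate, rivals' prices
        "output_price": output_price,
    }
    line = json.dumps(record, sort_keys=True)
    # hash the serialized record to make tampering detectable
    record["record_hash"] = hashlib.sha256(line.encode()).hexdigest()
    logfile.write(json.dumps(record, sort_keys=True) + "\n")

# Usage sketch:
# with open("pricing_audit.jsonl", "a") as f:
#     log_pricing_decision(f, "v2.3.1", {"rival_price": 9.99}, 9.49)
```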

The second obstacle is one of identification. Indeed, while ex ante testing may somewhat alleviate the burden of proof that competition authorities face when proving tacit collusion, such cases of possible tacit collusion still need to be identified. As pointed out above, this is not an easy task, and it cannot be excluded that, as for classical tacit collusion, the error cost may be such that only extreme cases could be identified with enough certainty. Another route would be to draw inspiration from financial markets, where the use of algorithms for trading is routine. To prevent markets from entering some kind of resonance or from being manipulated, market regulators have implemented a number of rules. For instance, MiFID 2 introduced rules on algorithmic trading and high-frequency trading aimed at avoiding the emergence of risks and facilitating their identification.

 

V. CONCLUSION

Although the extent of the risks that pricing algorithms entail for competition is still uncertain, possible ways to tackle this issue have to be identified in advance, in case the risk materializes. In this regard, despite their respective drawbacks, dedicated regulation and competition law enforcement adaptations combined could provide a competition policy answer. However, significant difficulties still lie ahead.

Hence, a first option would consist of mandating, through dedicated regulation, an accountability mechanism for algorithmic pricing. This would create an informational basis that could enable competition rules to address algorithmic tacit collusion with a lower evidentiary threshold than for classical tacit collusion. If the uncertainty linked to the legal basis remains too high, then dedicated legislation setting out principles-based rules or an authorization regime might be necessary, provided of course there is enough evidence that algorithmic pricing does facilitate tacit collusion. The choice between the two types of regimes should be based on the predictive value of algorithm testing and the strength of monitoring tools. Significant work remains to be undertaken on both aspects.

Of course, these thoughts are no substitute for the work that still needs to be done on how to build a testing procedure that is closer to real market conditions and on whether the risk associated with the use of algorithms in real markets is high enough to justify more regulation.


1 Respectively economist, chief economist, and vice-president at the French Competition Authority. The views presented in this paper are those of the authors only. They are not meant to reflect those of the French Competition Authority.

2 See for instance Ezrachi, A. & Stucke, M. E. (2016). Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy. Harvard University Press; Harrington, J. (2018). Developing Competition Law for Collusion by Autonomous Artificial Agents. Journal of Competition Law & Economics, Vol 14(3), pp. 331–363; or Calvano, E., Calzolari, G., Denicolò, V. & Pastorello, S. (2019a). Algorithmic Pricing: What Implications for Competition Policy? Review of Industrial Organization. Vol 55, pp. 155-171.

3 Competition & Markets Authority (2018), Pricing Algorithms: Economic working paper on the use of algorithms to facilitate collusion and personalised pricing; Autorité de la Concurrence & Bundeskartellamt (2019). Algorithms and Competition; Autoridade da Concorrência (2019), Digital Ecosystems, Big Data and Algorithms. Also see the paper by the President of the Bundeskartellamt in this issue.

4 Tesauro, G. & Kephart, J. O. (2002), Pricing in Agent Economies Using Multi-Agent Q-Learning, Autonomous Agents and Multi-Agent Systems, Vol 5(3), pp. 289 et seq.; Calvano, E., Calzolari, G., Denicolò, V. & Pastorello, S. (2019a). Algorithmic Pricing: What Implications for Competition Policy? Review of Industrial Organization. Vol 55, pp. 155-171.

5 Calvano, E., Calzolari, G., Denicolò, V. & Pastorello, S. (2019b). Artificial Intelligence, Algorithmic Pricing and Collusion (https://ssrn.com/abstract=3304991).

6 Klein, T., Autonomous Algorithmic Collusion: Q-learning Under Sequential Pricing, Amsterdam Law School Research Paper, 2019.

7 Calvano, E., Calzolari, G., Denicolò, V. & Pastorello, S. (2019b). Artificial Intelligence, Algorithmic Pricing and Collusion.

8 Hansen, K., Misra, K. & Pai, M. (2020). Algorithmic Collusion: Supra-competitive Prices via Independent Algorithms, CEPR Discussion Paper Series.

9 Kühn, K.-U. and Tadelis, S. (2017). The (D)anger Behind Algorithmic Pricing, Mimeo.

10 Schwalbe (2018), Algorithms, Machine Learning, and Collusion, Journal of Competition Law & Economics, Vol. 14(4), pp. 568 et seq.

11 In their report, the Autorité de la Concurrence and the Bundeskartellamt distinguish descriptive algorithms, whose strategy and actions can be understood by analyzing the code or a description of the algorithm, from black-box algorithms, whose behaviour is hardly interpretable by humans.

12 Experimental studies often consider Q-learning algorithms, a particular class of reinforcement learning algorithms that could be classified as a black-box algorithm.

13 Competition & Markets Authority (2018), Pricing Algorithms: Economic working paper on the use of algorithms to facilitate collusion and personalised pricing.

14 Calvano, E., Calzolari, G., Denicolò, V. & Pastorello, S. (2019b). Artificial Intelligence, Algorithmic Pricing and Collusion.

15 Hansen, K., Misra, K. & Pai, M. (2020). Algorithmic Collusion: Supra-competitive Prices via Independent Algorithms, CEPR Discussion Paper Series.

16 Autorité de la Concurrence & Bundeskartellamt (2019). Algorithms and Competition.

17 Schwalbe (2018), Algorithms, Machine Learning, and Collusion, Journal of Competition Law & Economics, Vol. 14(4), pp. 568 et seq.

18 See for instance the results from Crandall, J. W., Oudah, M., Tennom, Ishowo-Oloko, F., Abdallah, F., Bonnefon, J.-F., Cebrian, M., Shariff, A., Goodrich, M. A., & Rahwan, I. (2018). Cooperating with machines, Nature Communications. Vol. 9, 233.

19 See the discussion in Calvano, E., Calzolari, G., Denicolò, V. & Pastorello, S. (2019a). Algorithmic Pricing: What Implications for Competition Policy? Review of Industrial Organization. Vol 55, pp. 155-171.

20 Calvano, E., Calzolari, G., Denicolò, V. & Pastorello, S. (2019a). Algorithmic Pricing: What Implications for Competition Policy? Review of Industrial Organization. Vol 55, pp. 155-171.

21 See Fjeld J. & Nagy A. (2020). Principled Artificial Intelligence, Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI. Berkman Klein Center for Internet & society.

22 Ezrachi, A. & Stucke, M. E. (2016). Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy. Harvard University Press.

23 For instance, based on the results of the experimental literature, Q-learning algorithms could be banned because they have been found to lead to tacit collusion. It should however be noted that Crandall et al. (2018) find that a lot of other types of algorithms have higher tendencies to cooperate than Q-learning algorithms.

24 Lamontanaro, A. (2020). Bounty Hunters for Algorithmic Cartels: An Old Solution for a New Problem, Fordham Intellectual Property, Media and Entertainment Law Journal, Vol 30(4), pp. 1259 et seq.

25 Marty, F., Harnay, S. & Toledano, J. (2019) Algorithmes et décision concurrentielle : risques et opportunités. Revue d’Economie Industrielle, Vol. 166, pp. 91 et seq.

26 Case 40/73 Suiker Unie, ECLI:EU:C:1975:174, paras. 26 and 174.

27 European Commission (2017). Algorithms and Collusion – Note from the European Union. OECD.

28 It could be argued that a situation of tacit collusion could be covered by the concept of collective dominance. After all, the tests set out in the EU merger guidelines and in the case law on exchanges of information are very similar. In the case of similar algorithms being used by competitors, this would make sense. However, an abuse still needs to be identified. There again, creatively, one could imagine that reliance on pricing algorithms could lead to excessive prices and be captured as an exploitative abuse.

29 A few examples of questions: What level of supra-competitive prices/profits constitutes an abuse? Relative to which benchmark? For what period of time? What happens if tacit collusion is only reached by a small number of competitors active in the market?

30 European Commission (2020). White Paper on Artificial Intelligence – a European Approach to Excellence and Trust. In particular, the Commission stresses that requirements for “high-risk” AI applications should address: training data; data and record-keeping; information to be provided; robustness and accuracy; human oversight; and some specific requirements for certain particular AI applications.