One of the issues under discussion among scholars of competition law in the digital space is collusion, and the ineffectiveness of competition law in dealing with collusion that is driven by algorithms rather than humans. This perceived ineffectiveness rests on several assumptions, the main one being that humans, and not algorithms, have the ability to communicate. In this article, I put forward some arguments to challenge this view. I outline how algorithms “communicate,” if they communicate at all, and discuss whether such communication can be perceived as communication for the purposes of competition law. Collusion cannot happen without communication; communication is therefore the first essential step for anti-competitive collusion to exist. The manner in which we perceive algorithms and their interactions thus determines whether specific legal requirements of anti-competitive collusion can be applied to algorithmic collusion.

By Barbora Jedlicková1

 

In recent years we have seen a rich discussion, legislative proposals and even application of competition law2 with regards to large digital platforms.3 It is reasonable to assume that this is just the beginning of addressing the existing and potential anticompetitive issues in the digital space. Some scholarly works on competition law and the digital economy also discuss other topics, with various types of algorithmic collusion being one of the most prominent. Considering, first, that innovation occurs at enormous speed in the digital world; second, that cartels are typically among the major enforcement priorities of competition law agencies; and, third, that large digital platforms are just one of many hot issues of competition law in the digital space, it is only a matter of time before competition law regulators study algorithmic collusion profoundly.4

When it comes to anticompetitive collusion, the most challenging hurdle for enforcing competition law is having enough evidence to prove anticompetitive conduct. Various competition law regimes require collusion between at least two market participants, with horizontal collusion being recognized as more damaging to competition than vertical collusion.5 Any collusion requires some form of communication and some minimum evidence to prove it.

When the difficulty of proving anticompetitive collusion is combined with the digital space, most notably with collusion driven by algorithms, the hurdle a competition law regime needs to overcome becomes even more challenging. This is particularly so for anticompetitive algorithmic collusion in situations where there is no direct human input other than designing and running the algorithms.

The essential evidence for proving anticompetitive collusion, which can be tacit, almost always involves some form of communication.6 Which kinds of communication can assist in proving anticompetitive collusion, as opposed to representing normal market conduct, and where exactly the boundaries between competitive and anticompetitive multilateral (or bilateral) conduct lie, are questions which continue to be examined and which involve a rich scope of study. These questions are also present in a digital setting. However, the digital world adds another layer of complexity, particularly in situations where collusion is reached via algorithms and not humans. How common such algorithmic collusion is, or will become in the future, is another area for examination. For now, it is alarming enough that it is possible.

I look at this possibility (if not already the reality) through one of the most significant hurdles that proving anticompetitive algorithmic collusion presents: can algorithms “communicate,” through the lens of competition law, in order to collude?

In other words, the complexity of algorithmic collusion and its artificial, non-human character mean that, before specific requirements for proving anticompetitive collusion are examined, we really need to reach a consensus, at least within a specific competition-law regime, on whether algorithms can communicate among themselves. If they do not communicate, then they cannot collude within the existing rules and principles of competition law, and they cannot, therefore, infringe the current competition law. If this is the case, then competition law rules governing anticompetitive collusion need to be re-examined in order to address collusive behavior in the digital space. However, if the answer is positive, the current competition-law rules can, at least to a certain extent, apply to algorithmic collusion.

In order to answer the question of whether algorithms can communicate with each other for the purposes of competition law, I will investigate the meaning of the term “communication” from both general and competition-law perspectives, and I will discuss how this “communication” occurs among algorithms.

 

I. WHAT ARE ALGORITHMS? AND HOW DO ALGORITHMS “COMMUNICATE,” IF THEY DO?

Algorithms, being “sets of mathematical steps designed to solve specific problems or perform specific tasks,”7 are the essential building blocks of the digital world. The digital world as we know it is about sharing, storing and analyzing information, and all of these functions are possible thanks to algorithms. If the digital world is about information, then algorithms are as well.

The most important function of the digital world for the purposes of determining the existence of anticompetitive collusion is sharing, in other words exchanging information, followed by its analysis. This exchange of information can occur either through direct human input, typically via emails, messages, digital meetings, etc., or through indirect input, where specific programs and functions of the digital world, their algorithms, are constructed in such a way that they exchange and analyze information (in other words, data) in the digital space. In this situation, direct human input is replaced with algorithmic “acting.” This acting can be the result of artificial intelligence (hereinafter, AI), which includes machine learning and deep learning.8

Learning algorithms are designed to make their own decisions by learning from the data they receive or collect. They can also be designed to read other algorithms and to be read by other algorithms – they can read each other’s “minds.”9 Such “communication” can include the way future prices are determined or the way other business strategies are decided. By making such information known to each other, algorithms can be set up to act upon it in a collusive manner.
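The “reading of minds” described above can be sketched in code. The following is a deliberately simplified, hypothetical illustration (the class names, the cost-plus-markup rule and all figures are my own assumptions, not drawn from any real pricing system): one firm’s algorithm leaves its pricing parameters readable, and a rival’s algorithm inspects the rule itself, rather than merely observing past prices, and aligns with the price before it is ever posted.

```python
# Hypothetical sketch of algorithmic "mind reading": one pricing algorithm
# exposes readable parameters; the rival reads the rule and aligns with it.
# All names and numbers are illustrative assumptions.

class LeaderAlgorithm:
    """Sets its price from a simple readable rule: cost plus a fixed markup."""
    def __init__(self, cost, markup):
        # These parameters are deliberately left readable (transparent).
        self.cost = cost
        self.markup = markup

    def next_price(self):
        return self.cost * (1 + self.markup)

class RivalAlgorithm:
    """Reads the leader's parameters instead of merely observing its prices."""
    def next_price(self, leader):
        # "Reading the mind": inspecting the rival's rule, not its output.
        anticipated = leader.cost * (1 + leader.markup)
        return anticipated  # match the anticipated price exactly

leader = LeaderAlgorithm(cost=10.0, markup=0.2)
rival = RivalAlgorithm()
# The rival aligns before the leader has even posted a price.
assert rival.next_price(leader) == leader.next_price()
```

The point of the sketch is that no message passes between the two programs; the alignment comes entirely from one algorithm’s internal rule being readable by the other.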

The origin of algorithmic “reading of minds” can be linked to the origin of the internet. When the internet was invented, the idea was that this new digital space would be transparent and free to use.10 Thus, the algorithms created for these purposes were also set with this idea in mind; meaning that their reading parameters were usually open or transparent, allowing for the reading of minds of other algorithms.

However, this transparent trend has been modified. In the current digital economy, the digital world generates enormous profits for digital platforms, among other entities. This partial shift from the original idea of openness, transparency, sharing and free usage and access towards profit generation has also affected the transparency of algorithms and their reading parameters. Although this ability to “read” is still desirable in some situations and by some platforms,11 others make their reading parameters, and even their data, unavailable to others.

Thus, while the Open Web is an example of a free, sharing platform characterized by “visible, findable and linkable” content, interoperability and transparency,12 other, more recent platforms, such as Facebook and Uber, place limits on free sharing via Application Programming Interfaces (APIs),13 thereby creating “walled gardens.” An API locks both users and app developers into a landscape defined and controlled by the platform, such as Facebook. The increasing use of APIs by platforms has been removing information from the Open Web.14 In general, platformization has been moving the digital world from published URIs and open HTTP transactions to “closed applications that undertake hidden transactions.”15

What does this recent development in the digital space mean for algorithmic “communication” and parallelism? Generally speaking, it means that both algorithmic communication and parallelism are not as easy to achieve in the current digital world.

The more transparent the digital space is, the more quickly algorithms react and adjust prices. They can even adjust prices and other business decisions, such as output, before a competitor’s price change occurs, simply through precise predictions.16 In the Open Web, where data are available and accessible but algorithms’ parameters are not readable, algorithms can “observe” competitors by collecting and analyzing data. With this analysis the algorithms can, for instance, predict a competitor’s next step, which can potentially lead to acting in parallel with competitors’ algorithms. However sophisticated such algorithms may be, this is not a true reading of the mind; it is merely unilateral acting with potential parallelism as a result.
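This purely unilateral observation can also be sketched. In the hypothetical example below (the function, the naive extrapolation rule and the price series are all illustrative assumptions), an algorithm sees only a competitor’s published past prices, predicts the next one by simple trend extrapolation, and matches it. Parallelism results, yet nothing resembling a message is ever exchanged.

```python
# Hypothetical sketch of unilateral "observation" without any communication:
# the algorithm only sees the competitor's published past prices, fits a
# naive trend, predicts the next price, and matches it. Numbers illustrative.

def predict_next_price(observed_prices):
    """Naive linear extrapolation from the last two observed prices."""
    if len(observed_prices) < 2:
        return observed_prices[-1]
    step = observed_prices[-1] - observed_prices[-2]
    return observed_prices[-1] + step

# Publicly observable competitor prices, e.g. scraped from the Open Web.
competitor_history = [100.0, 102.0, 104.0]

# Unilateral decision: match the predicted competitor price.
my_price = predict_next_price(competitor_history)
assert my_price == 106.0  # parallel pricing without any exchange of messages
```

The contrast with the earlier “mind reading” scenario is the source of the input: here only the competitor’s outputs (past prices) are visible, never its internal rule.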

For parallel conduct to arise from the reading of minds by algorithms, we need both true transparency, where data are accessible and available, as well as readable parameters. Without these conditions we cannot link algorithmic features to algorithmic “communication” unless certain algorithms are sophisticated enough and designed in such a way as to learn how to communicate with each other. Recent developments show that this is possible.

Schwalbe discussed and summarized some recent studies on algorithms learning to communicate in his article, “Algorithms, Machine Learning, and Collusion.”17 These studies show that it is possible for algorithms to learn to communicate via communication protocols and thus share communication codes, and that there are various ways in which algorithmic communication can be achieved. Such communication may even be possible without sharing a common communication protocol.18 From a competition perspective, what is most disturbing is “that algorithms can also learn to hide their communication from third parties.”19 This means that algorithms can hide any evidence of communication among themselves.
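The studies Schwalbe cites involve adversarial neural cryptography, which is far beyond a short illustration. As a minimal stand-in for the underlying idea only (not the method of those studies), the sketch below shows how two algorithms sharing a secret key can exchange a message that an outside observer, such as a regulator intercepting the traffic, cannot read; here the hiding is done with a simple one-time-pad XOR, and the message content is an invented example.

```python
# Minimal stand-in for "hidden" algorithmic communication (NOT the adversarial
# neural cryptography of the cited studies): two algorithms sharing a secret
# key exchange a message that an outside observer cannot read.
import secrets

def xor_bytes(data, key):
    """XOR each byte of data with the corresponding key byte (one-time pad)."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"raise price to 12"           # illustrative collusive content
key = secrets.token_bytes(len(message))  # shared in advance between the two

ciphertext = xor_bytes(message, key)     # what a third party would intercept
assert xor_bytes(ciphertext, key) == message  # the colluding peer recovers it
```

To the intercepting third party the ciphertext is indistinguishable from random bytes; only the peer holding the key can recover the message, which is the evidentiary problem the quoted passage describes.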

While these forms of algorithmic communication may not be common as yet, there is nothing to suggest that they will not become more common in the future. Therefore, both forms of algorithmic “communication,” the reading of minds and direct AI communication among algorithms, need to be considered for the purposes of applying competition law to algorithmic collusion.

 

II. WHAT DOES “COMMUNICATION” MEAN? AND IS ALGORITHMIC COMMUNICATION A FORM OF COMMUNICATION?

In the first part of the article, I discussed three general ways for multiple algorithms owned by multiple competitors to reach parallel conduct such as parallel pricing. The first, which does not constitute “communication,” arises due to transparency and AI, where algorithms “observe” competitors’ behavior in the digital space by collecting and analyzing data. The second example occurs where algorithms have readable parameters and thus algorithms can read each other’s minds; something that is not possible in a non-digital, human world (unless you believe in telepathy). Finally, the last example is algorithmic communication via AI.

The second and third methods of algorithmic parallelism could be the subject of current competition law if, among other things, their forms of “communication” are perceived as communication for the purposes of competition law.

Various competition-law regimes use different legal terminology to capture anticompetitive collusion. As I noted at the beginning of this article, communication is usually an important piece of evidence, alongside the existence of parallel behavior, in proving anticompetitive collusion. The EU has a very rich body of case law explaining the term “communication” systematically and clearly for this purpose, giving it a wide meaning that includes not only direct but also indirect forms of communication.20 Communication which leads to anticompetitive collusion can be two-sided, or even one-sided (usually in the form of signaling) if it is followed by a parallel action of the recipient of the communication. Similarly, the U.S. has a number of antitrust-law cases where one-sided communication served as important evidence for proving anticompetitive collusion.21 Therefore, even a situation where one algorithm is readable by (or communicates to) other competitors’ algorithms, which then act upon it in a parallel way, could fit under the competition-law banner of “communication.”

Under EU competition law, the object or effect of this communication must be “to create conditions of competition which do not correspond to the normal conditions of the market.”22 Such communication also leads to the removal of uncertainties among competitors “as to their future conduct and, in doing so, also eliminated a large part of the risk usually inherent in any independent change of conduct on one or several markets.”23 Surely, communication among algorithms, or the reading of other algorithms’ minds regarding future conduct (for instance, the way future prices are determined), removes uncertainties that would exist without such algorithmic “communication.” It also leads to market conditions other than normal in situations where the communication has no legitimate business reason.

The EU approach to communication used for proving and determining the existence of anticompetitive collusion is well summarized in Suiker Unie, where the Court of Justice of the European Union highlights that the provision concerning anticompetitive collusion, Article 101 TFEU, requires that competitors act independently, explaining that:

Although it is correct to say that this requirement of independence does not deprive economic operators of the right to adapt themselves intelligently to the existing and anticipated conduct of their competitors, it does however strictly preclude any direct or indirect contact between such operators, the object or effect whereof is either to influence the conduct on the market of an actual or potential competitor or to disclose to such a competitor the course of conduct which they themselves have decided to adopt or contemplate adopting on the market.24

Whether algorithmic communication and the reading of minds should be classified as direct or indirect forms of communication among competitors is not essential to deciding whether both forms of algorithmic “communication” discussed in this article could be covered by the EU approach. It is important to note that the EU approach (like the U.S. approach) includes both direct and indirect forms of communication and that both algorithmic AI communication and the reading of minds by algorithms could be perceived as forms of “communication.” The general definition(s) of the term “communication” can either support or dismiss this argument.

Looking at the term “communication” through an historical lens, we can see that its meaning has been constantly evolving: from direct oral communication, through written messages delivered by messengers, including carrier pigeons, to the telegraph, then the telephone, and now digital communication. Each new invention which connects people and businesses has enriched the meaning of communication and its forms. The newest form, digital communication, is itself subject to constant innovation and allows us to communicate both directly and indirectly. The digital world is about information and its exchange; claiming that algorithms do not communicate therefore makes little sense in an environment which centers on collecting, exchanging and analyzing information. With the existence of AI in the digital world, it is logical to accept AI algorithmic communication as a new form of communication and a new step on this evolutionary journey of communication.

The current definition of communication endorses this. For instance, the Merriam-Webster Dictionary refers to communication as “a process by which information is exchanged between individuals through a common system of symbols, signs, or behavior.”25 The term “individuals” is not necessarily limited to mean “humans” and, considering recent developments in the digital space, nor should it be. Looking at another definition, where communication is explained as “[t]he transmission or exchange of information, knowledge, or ideas, by means of speech, writing, mechanical or electronic media, etc.; (occasionally) an instance of this,”26 we can see how both AI algorithmic communication and the algorithmic reading of minds fit well within such an interpretation. Considering that “telepathy” is defined as a form of communication,27 the same applies to the algorithmic reading of minds.

 

III. CONCLUSION

How the term “communication” is interpreted is an essential question which assists in the determination of whether a particular competition-law regime could, at least potentially, prohibit algorithmic collusion in situations where it is not humans but their algorithms that collude.

I argue that algorithms communicate with each other if they are programmed to do so, whether they learn to communicate and exchange information via artificial intelligence or they communicate due to the transparency of their reading parameters, which allows them to read each other’s minds. If algorithms can exchange information and act upon this information, then claiming that “algorithmic communication” is not communication for the purposes of competition law unless humans are directly involved does not make sense.

The same concepts of competition law should apply equally to both the digital and non-digital worlds. The fact that collusion involves algorithms and their AI should not stop the various competition-law regimes from addressing situations where algorithms collude. If we value competition as a great mechanism for enhancing the economy, we need to make sure that competition does not end where the digital world begins.

The ways in which various competition-law regimes address algorithmic collusion in the future will influence the development of algorithmic communication. For instance, if the algorithmic reading of minds is found to fall within competition law’s usage of the term communication, this could further decrease the transparency of reading parameters. Taking AI algorithmic communication as a second example, if it is covered by competition-law regimes as a form of communication, this could increase the tendency to design algorithms in such a way that they hide their communication, thus making potential anticompetitive algorithmic collusion harder to detect.


1 Senior Lecturer, T C Beirne School of Law, University of Queensland. Email: b.jedlickova@law.uq.edu.au.

2 The connotation “competition law” used in this article means both competition law and antitrust law.

3 For instance, the European Union has two legislative proposals targeting large online platforms, the Digital Markets Act and the Digital Services Act, both submitted to the European Parliament and the European Council on 15 December 2020. The European Commission has made a number of decisions on infringements of EU competition law with regards to Big Tech companies. The Australian Competition and Consumer Commission (“ACCC”) has conducted several digital-platforms inquiries. The inquiries commenced with the ACCC being directed by the Treasurer to conduct the Digital Platforms Inquiry on December 4, 2017, followed by the Digital Advertising Services Inquiry and Digital Platform Services Inquiry, both announced in February 2020. The recent Australian bill (the Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020) aims to address imbalances in bargaining power between digital platforms and news media.

4 Competition law regulators and organizations have taken notice of this topic. For instance, the Organisation for Economic Co-operation and Development (“OECD”) studied algorithmic collusion, publishing a report on this topic in 2017. (OECD, Algorithms and Collusion: Competition Policy in the Digital Age, 51 (2017), available at http://www.oecd.org/daf/competition/Algorithms-and-colllusion-competition-policy-in-the-digital-age.pdf).

5 Some competition law regimes classify vertical restrictions as forms of unilateral conduct (for instance, Australia), while other competition/antitrust law regimes require proof of anticompetitive collusion (for instance, EU, U.S.).

6 Unless, typically, the circumstantial evidence is based on the market “structure” itself. This was the case in American Tobacco Co. v. U.S. 328 U.S. 781, 66 S.Ct. 1125 (1946). Professor Page analyzed anticompetitive tacit agreements and the evidence required to prove them under Section 1 of the Sherman Act (1890) in the U.S. in his article “Tacit Agreement Under Section 1 of the Sherman Act” (William H. Page, Tacit Agreement Under Section 1 of the Sherman Act, 81 (2017) ANTITRUST L.J. 593), where he analyzed U.S. cases on tacit agreements. He proposed that these agreements are found and proven in situations where there is relevant communication and competitors then act upon it in a parallel manner (at 608).

7 B. Jedlickova, “Digital Polyopoly,” (2019) 42(3) World Competition, 309, p. 315.

8 The various forms of algorithmic AI have been well explained in several scholarly works on algorithmic collusion. For instance, see, M. S. Gal, “Algorithms as Illegal Agreements,” (2019) 34(1) Berkeley Tech. L.J. 67, pp. 77-92.

9 See, e.g. Von Neumann, First Draft of a Report on the EDVAC, reproduced in Origins of Digital Computers: Selected Papers 383 (Brian Randell ed., 1982).

10 Ibid.

11 For instance, Plantin et al. note that the success of platforms such as Apple’s iOS and Google’s Android comes from “attracting many independent actors to contribute to their software ecologies, instead of attempting to build and market stand-alone products.” Jean-Christophe Plantin et al, “Infrastructure studies meet platform studies in the age of Google and Facebook,” (2018) 20(1) New Media & Society, 293, 298.

12 The definition of the term “platform” is not absolutely unified, with some experts referring to the “Open Web” as a platform. See, e.g. Jean-Christophe Plantin et al, “Infrastructure studies meet platform studies in the age of Google and Facebook,” (2018) 20(1) New Media & Society, 293.

13 “An API is an interface provided by an application that lets users interact with or respond to data or service requests from another program, other applications, or Websites. APIs facilitate data exchange between applications, allow the creation of new applications, and form the foundation for the ‘Web as a platform’ concept.” (Anne Helmond, “The Platformization of the Web: Making Web Data Platform Ready,” (2015) 11(1) Social Media + Society, 2, 4.)

14 Jean-Christophe Plantin et al, “Infrastructure studies meet platform studies in the age of Google and Facebook,” (2018) 20(1) New Media & Society, 293, 303.

15 Ibid.

16 See, e.g. A Ezrachi and M. E. Stucke, Virtual Competition – The Promise and Perils of the Algorithm Driven Economy (Harvard University Press 2016), pp. 72-73.

17 Ulrich Schwalbe, “Algorithms, Machine Learning, and Collusion,” (2019) 14(4) Journal of Competition Law & Economics, 568, at 594-596.

18 Ibid., p. 595, referring to S. Barrett et al., “Making Friends on the Fly: Cooperating with New Teammates,” (2017) 242 Artificial Intelligence, 132.

19 Ibid., referring to M. Abadi & D.G. Andersen, “Learning to Protect Communications with Adversarial Neural Cryptography,” Working paper, Google Brain, available at https://arxiv.org/pdf/1610.06918v1.pdf.

20 See, e.g. Case 40 to 48, 50, 54 to 56, 111, 113 and 114/73 European Sugar Cartel, re; Coöperatieve Vereniging ‘Suiker Unie’ UA v. Commission [1975] ECR 1663, at ¶ 4; Cases T-25-26/95 etc. Cimenteries CBR SA and Others v. E.C. Commission, ECLI:EU:T:2000:77, at ¶ 87.

21 For instance, Interstate Circuit v. U.S. 306 U.S. 208, 59 S.Ct. 467 (1939); U.S. v. Foley 598 F.2d 1323 (1979); In re Coordinated Pre-trial Proceedings in Petroleum Products Antitrust Litigation 906 F.2d 432 (1990).

22 Cases T-25-26/95 etc. Cimenteries CBR SA and Others v. E.C. Commission, ECLI:EU:T:2000:77, at ¶ 87 (emphasis added). Also see, e.g. Case C-8/08, T-Mobile Netherlands BV v. Raad van bestuur van de Nederlandse Mededingingsautoriteit, ECLI:EU:C:2009:343, at ¶ 33; 114/73 Suiker Unie, at ¶ 4.

23 Case 48, 49, and 51-57/69 Imperial Chemical Industries Ltd. v. Commission of the European Communities ECLI:EU:C:1972:70 (Dyestuffs), at ¶ 101, emphasis added, also see, at ¶ 112, 119.

24 114/73 Suiker Unie, at ¶ 4, 173 (emphasis added); also see, e.g. Dyestuffs, at ¶ 10; C-49/92 P, Commission v Anic Partecipazioni, EU:C:1999:356, at ¶ 116.

25 Merriam-Webster Dictionary (online edition).

26 Oxford English Dictionary (online edition).

27 See, e.g. Cambridge English Dictionary (online edition); Oxford Learner’s Dictionaries (online edition).