Dear Readers,

Although first coined in the 17th century, until very recently the term “algorithm” was obscure jargon rarely used outside the field of computer science. Today, discussion of algorithms is scarcely out of the popular news headlines, such is the increasing sway that algorithms exert over every aspect of modern life.

At base, an algorithm is a neutral mechanism, defined as “a step-by-step procedure for solving a problem or accomplishing some end.” Leaving to one side the advanced algorithms used by the tech giants, the processes schoolchildren use to perform addition, subtraction, division, and multiplication are themselves “algorithms,” albeit implemented with pencil and paper. Nonetheless, when the power of algorithms is amplified by vast datasets and computational resources, there is a risk of their being put to socially undesirable ends, whether intentionally or unintentionally. This typically results from the introduction of a real or perceived bias into either the algorithm itself or the input data upon which it operates.

In the antitrust context, “algorithmic bias” has famously been invoked in cases accusing large tech companies of favoring their own products or services to the detriment of competitors. But this is the tip of the iceberg, as algorithms are embedded by definition into any automated process engaged in by commercial entities, and also increasingly influence aspects of the operation of public bodies (including, potentially, antitrust enforcers). 

Needless to say, the terrain is vast, but the contributors to this Chronicle address several of the most salient issues raised by algorithms at the present moment, from a variety of perspectives.

Appropriately, Giovanna Massarotto opens by asking what “bias” is and how it manifests in algorithms. Drawing on linguistics and the social sciences, the author underlines that bias in people is well documented and likely inevitable. Potentially problematic algorithms therefore include those that rely on past data, which may embed historical biases or draw on unrepresentative or insufficient samples. While antitrust agencies must be vigilant in addressing bias implemented through algorithms, a more fundamental question remains unresolved: why and how is an AI algorithm biased in the first place?

Addressing the most headline-grabbing instances of the antitrust analysis of algorithmic “bias,” Emilie Feyler & Veronica Postal discuss the stringent rulings by competition authorities against allegedly self-preferencing algorithms used by digital platforms. As the authors note, such algorithms can, from a theoretical perspective, have pro-competitive benefits, and there is no consensus in the economic literature on whether those benefits or the possible anti-competitive harms prevail. Determining the net impact of recommendation algorithms on competition and consumer welfare therefore requires individualized analysis accounting for the workings of the specific algorithm, the competitive context, and the market environment.

Leaving aside these banner cases, Robert Clark & Daniel Ershov examine the impact of algorithmic pricing software on the bread and butter of antitrust enforcement: retail prices for consumers. Most recent academic work has studied this question from a theoretical or experimental perspective. The authors describe the first empirical analysis of the consequences of wide-scale adoption of algorithmic pricing software, focusing on Germany’s retail gasoline market, where, according to trade publications, such software became widely available in 2017. The evidence suggests that the use of these algorithms increased margins (and more so in competitive markets), indicating that it may have softened competition.

Looking at the other side of the coin, i.e. enforcers making use of algorithms in their own activities, Holli Sargeant & Teodora Groza examine the balance between the benefits of algorithms in antitrust enforcement and the genuine concerns surrounding bias. They argue that while algorithmic bias should not be ignored, algorithms can be valuable tools when carefully designed, and that the overemphasis on bias concerns stems from a lack of technical understanding. The article explores the use of algorithms in law enforcement, highlights the risks of bias, and demonstrates how algorithmic design can mitigate these concerns. By offering a nuanced perspective on the potential and pitfalls of algorithmic tools, the article contributes to the ongoing discourse on the responsible and effective use of algorithms in antitrust enforcement.

In the same vein, Sampath Kannan explains how machine learning algorithms are increasingly used to make critical decisions, which makes eliminating bias in these algorithms, and ensuring their fairness to groups and individuals, vitally important. The piece reviews common definitions of “fairness” for algorithms and points to a way out of the seeming impasse of choosing between mutually incompatible criteria in certain scenarios. It also identifies what algorithmic decision-making systems can learn from their human counterparts, namely rules-of-evidence-type limitations on the kinds of data that may be used, and forbearance in making highly preemptive decisions. Finally, the article describes the need for transparent and accountable machine-learning models in the implementation of any such algorithmic systems.

Finally, and appropriately enough, Paola Cecchi Dimeglio underlines the importance of dialogue and innovation in correcting any dangerous trends in the use of algorithmic AI. Managing and eliminating algorithmic bias in virtual, augmented, and mixed reality technologies will be critical to this overall success. The responsibility for finding an appropriate balance lies with businesses themselves and with the legal system alike.

As always, many thanks to our great panel of authors.

Sincerely,

CPI Team
