Before asking whether algorithmic bias is a competition concern, we might need to understand what bias is and how it is exhibited in algorithms. Bias in people is well known and likely inevitable, raising the question of what we can do if algorithms learn to be biased. Artificial intelligence (“AI”) algorithms are those that typically raise issues of bias because they learn from past data, which may carry historical bias or be unrepresentative or insufficient. AI algorithms are built by software developers, who might also be biased. Companies are increasingly using AI algorithms to compete more effectively. It is fundamental that antitrust agencies tackle anticompetitive practices performed by means of algorithms, which may involve algorithmic bias. Bias is a broad term, and exclusionary practices are likely to increase bias among consumers by limiting their choices. Therefore, antitrust agencies can be critical in addressing issues related to algorithmic bias. However, a more important question remains unresolved if we cannot explain why and how an AI algorithm is biased in the first place.

By Giovanna Massarotto[1]

 

I. INTRODUCTION

Before asking whether algorithmic bias is a competition concern, we might need to understand what bias is and how it is exhibited in algorithms. Bias in people is well known and likely inevitable, raising the question of what we can do if algorithms learn to be biased. Artificial intelligence (“AI”) algorithms are those that typically raise issues of bias because they learn from past data, which may carry historical bias or be unrepresentative or insufficient. AI algorithms are built by software developers, who might also be biased. Companies are increasingly using AI algorithms to compete more effectively. It is fundamental that antitrust agencies tackle anticompetitive practices performed by means of algorithms, which may involve algorithmic bias. Bias is a broad term, and exclusionary practices are likely to increase bias among consumers by limiting their choices. Therefore, antitrust agencies can be critical in addressing issues related to algorithmic bias. However, a more important question remains unresolved if we cannot explain why and how an AI algorithm is biased in the first place.

 

II. WHAT IS (ALGORITHMIC) BIAS?

Antitrust deals with competitive concerns in markets. Thus, in the algorithmic bias debate we might ask the question: Is bias a competition concern?

The most accurate answer is: “it depends.” First, we should clarify what we mean by bias and understand whether an algorithm can be biased and, if so, how.[2]

In 1950, Alan Turing, the father of computer science, asked the question “Can machines think?” and argued that this question was “too meaningless to deserve a discussion,”[3] because the answer depends on what we mean by thinking. Similarly, if we ask the question “can humans fly?” the answer depends on what we mean by “flying.”[4] When we take a plane, in a certain way we fly. However, this flying activity is different from that of birds, like eagles. Therefore, in investigating the meaning of bias, we need to consider the context in which bias is analyzed. We usually use the word “bias” to refer to racial bias or gender bias. However, bias typically describes a wide range of behaviors, which can be harmful for different reasons and in different ways.[5] Black’s Law Dictionary defines bias as “inclination; bent; prepossession; a preconceived opinion; a predisposition to decide a cause or an issue in a certain way, which does not leave the mind perfectly open to conviction.” It is considered different from “prejudice.” “Bias is a particular influential power, which sways the judgment; the inclination of the mind towards a particular object.”[6]

Thus, bias is a very serious concern. It is easy to imagine people who are influenced to lean towards a specific direction, “which does not leave the mind open to conviction;” thus, bias in people. Before the digital age, bias in the news was well known. In the popular book “Manufacturing Consent,” Edward S. Herman and Noam Chomsky describe the bias phenomenon connected to the media. The way the newspaper we choose to read selects the news it reports de facto generates bias in how and what we think. Numerous wars are being fought at any given time, yet we typically know about only one or a few of them. How the news is selected can affect our “judgment.” Advertising might cause bias in people for similar reasons. Bias is often unintentional, but it is present because we cannot realistically know about everything that is happening in the world, or about certain products without advertising. Thus, bias as something that affects our conviction seems inevitable in human beings.

What about algorithms? An algorithm is generally defined as a set of instructions that transforms some input into a certain output. The first non-trivial algorithm is often said to be Euclid’s algorithm, a method for calculating the greatest common divisor of two integers. It seems hard to believe that Euclid’s algorithm or similar algorithms can be biased. These algorithms are quite straightforward. The issue of bias particularly concerns AI algorithms, which are algorithms trained with a large amount of data to build models that make predictions related to the information of interest.[7] In AI algorithms, while the input is known, the output is usually unpredictable. Deep Learning (“DL”), for example, is an AI method relying on complex neural network architectures inspired by the human brain.[8] ChatGPT is an emblematic example of an AI system that adopts deep learning techniques. These methods can build models that are very good at predicting the behavior of a system, but in turn are often very bad at explaining why the model predicts a certain behavior, which challenges the model’s validity.[9] The interpretation of AI results is becoming increasingly challenging due to the sophistication of these models and methods. Therefore, in addition to data, design choices are important and need to be considered in assessing an AI system and its potential for bias, because some design choices in algorithms “are better than others.”[10] For example, the selection of features, as well as the algorithmic assumptions, can introduce bias into AI models.[11] Software engineers typically make these choices when they build AI systems. Although engineers’ decisions are driven by technical considerations, it is hard to ensure with certainty that these engineers are fully unbiased.[12] In other words, similar to human bias, algorithmic bias represents a concrete concern, and it seems inherent in the fact that these algorithms learn from past data and that the software developers who build AI systems can make technical choices that generate bias. Above all, we need to consider that we might fail to understand why and how the algorithm made certain predictions in the first place.
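To make the contrast concrete, here is Euclid’s algorithm in a few lines of Python. Every step is a fixed, inspectable rule that can be traced by hand, which is precisely the transparency that trained AI models often lack:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm for the greatest common divisor of two integers.

    Every step is a fixed rule written down by a person, so the output is
    fully explainable -- unlike a trained model, whose behavior emerges
    from data rather than from explicit instructions.
    """
    while b != 0:
        a, b = b, a % b
    return a

# We can trace the result by hand: 48 % 18 = 12, 18 % 12 = 6, 12 % 6 = 0.
print(gcd(48, 18))  # prints 6
```

Nothing in such an algorithm can “learn” a disparity; bias enters only when the instructions themselves are learned from data or shaped by a designer’s choices.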

In summary, AI algorithms rely on data to work and are becoming increasingly sophisticated, thus rendering the interpretation of their results difficult. Data quality and selection are essential to an AI system’s performance, as much as the algorithm design. Neither the data used to train AI algorithms nor the algorithm design is “impartial,”[13] and both are relevant from a bias perspective. However, dealing with algorithmic bias can be challenging because we often cannot explain an AI system’s results, let alone detect bias and its cause in the first place.

To make the discussion more intuitive, we can use the popular ProPublica study dating back to 2016, which examined COMPAS, an algorithm adopted by the U.S. legal system to facilitate judicial decision making.[14] COMPAS was trained specifically to assist U.S. judges in deciding whether a defendant was likely to re-offend while the trial was pending. The problem was that COMPAS “was found to be biased against African-Americans.”[15] Nicol Turner Lee, Paul Resnick, and Genie Barton noted that “[i]n the COMPAS algorithm, if African-Americans are more likely to be arrested and incarcerated in the U.S. due to historical racism, disparities in policing practices, or other inequalities within the criminal justice system, these realities will be reflected in the training data and used to make suggestions about whether a defendant should be detained. If historical biases are factored into the model, it will make the same kinds of wrong judgments that people do.”[16] Moreover, data might be insufficient or unrepresentative. If the data used to train the algorithm represents one group more than others, this disproportion is likely to be reflected in the model, leading to bias at scale. In addition, algorithm design choices are also “not impartial.”[17]
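As a purely illustrative sketch, using invented toy records rather than the actual COMPAS data, the disparity ProPublica measured can be expressed as a gap in false positive rates between groups:

```python
# Hypothetical records: (group, flagged_high_risk, actually_reoffended).
# These eight tuples are invented for illustration only.
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("B", True,  True),  ("B", False, False),
    ("B", False, False), ("B", False, True),
]

def false_positive_rate(group: str) -> float:
    """Share of people in `group` who did not reoffend but were
    nonetheless flagged high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("A", "B"):
    print(g, false_positive_rate(g))
# Group A: 2 of 3 non-reoffenders flagged (0.67); group B: 0 of 2 (0.0).
# If group A is over-represented in arrests in the training data, a model
# can reproduce exactly this kind of disparity at scale.
```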

Thus, the primary question now becomes whether it is possible to prevent bias in AI algorithms and, if so, how. Being aware of this risk certainly represents the first important step. The second obvious step is to analyze the phenomenon in a way that allows us to diagnose and limit bias in AI algorithms effectively. Many studies have been conducted to develop a regulatory framework that mitigates the risk of bias in algorithms, which can potentially occur at large scale. In Europe, the Artificial Intelligence Act mainly aims to strengthen rules around data quality, transparency, human oversight, and accountability.[18] AI systems would be classified based on the risks they pose, from safety to a person’s fundamental rights. AI applications “considered a clear threat to the safety, livelihoods and rights of people will be banned.”[19] In the U.S., the approach is more about adapting the existing legal framework to AI and investing in “infrastructure for mitigating AI risks.”[20] As outlined above, creating unbiased AI systems seems very challenging, but reducing the risk of algorithmic bias is certainly in the crosshairs of legislators all over the world.[21]

Now, the issue is whether algorithmic bias affects competition and how antitrust enforcers might assist in limiting it. In other words:

 

III. IS ALGORITHMIC BIAS A COMPETITION CONCERN?

Although at first glance algorithmic bias might not seem to be a strictly competition issue, algorithms are interesting from a competitive perspective for several reasons, including their potential to be biased. Consider the example of companies like Amazon and Google, which use algorithms to perform essentially all of their activities and to compete in markets. Companies are increasingly using algorithms to do what they regularly do more efficiently, and developing the best algorithm often means winning a market. AI algorithms are general purpose technologies that can be implemented in different contexts and situations. They are widely used, for example, in the advertising, search, and media industries. Therefore, creating algorithms has become increasingly important for any company, regardless of the market, to compete and remain relevant.

Algorithms are often developed to provide recommendations; to automate the distribution and allocation of demand and supply; to set or recommend a price; to monitor or filter information; and to aggregate data and communicate with consumers and businesses. In all of these functions, they can create bias at scale.[22]

The OECD has recently released a study on algorithmic competition with a focus on recommendation, search, allocation, pricing, and monitoring algorithms, which surveys the numerous papers and reports that antitrust agencies have drafted on these issues.[23] Because algorithms have become one of the main tools for companies to compete, it is fundamental that antitrust agencies understand and analyze them.[24] Antitrust agencies need to ensure that companies are not using algorithms to engage in anticompetitive conduct, including price fixing and exclusionary conduct. This does not seem to be optional: it is essential for antitrust to remain relevant in the present data-driven economy run by algorithms.

The UK Competition and Markets Authority (“CMA”) has been particularly active in this field, having had a Data, Technology and Analytics (“DaTA”) unit dedicated to these issues since 2018.[25] Many antitrust agencies are following the CMA’s DaTA unit model by creating dedicated units focused on data and algorithmic matters.[26]

Antitrust studies on algorithmic competition have clarified that algorithms can enhance consumer welfare by increasing the quality of products and services.[27] In 2018, an OECD study revealed that pricing algorithms based on AI techniques can benefit consumers significantly. UberX, for example, matched drivers to consumers seeking rides by means of a real-time pricing algorithm, and a study estimated that this service generated a consumer surplus of $2.9 billion in four U.S. cities.[28] On the other hand, it has been observed that algorithms might reduce competition by favoring collusion or exclusionary and exploitative conduct. The attention of antitrust regulators has been focused on self-preferencing, autonomous tacit collusion, algorithmic pricing, and algorithmic tying and bundling cases. It is not surprising that algorithms can perform all these practices, being sets of instructions that turn input into specific output. For example, vertically integrated digital platforms have raised the issue of so-called “intermediation bias” by potentially using their algorithms to favor their own products over those of a competitor.[29] The Google Shopping case is an emblematic example.[30] Therefore, the issue of bias is central to this antitrust discussion: if companies can use algorithms to engage in self-preferencing, collusion, or the exclusion of rivals more efficiently, they can de facto limit consumers’ choices in a way that can lead to bias.
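To see how little code “intermediation bias” requires, consider a minimal, purely hypothetical ranking sketch. All names, scores, and the boost parameter are invented for illustration; this is not a description of how any actual platform’s algorithm works:

```python
from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    relevance: float      # organic relevance score, 0..1
    first_party: bool     # True if the platform's own product

def rank(listings: list[Listing], own_boost: float = 0.2) -> list[Listing]:
    """Sort by relevance, plus a hidden bonus for first-party items."""
    return sorted(
        listings,
        key=lambda l: l.relevance + (own_boost if l.first_party else 0.0),
        reverse=True,
    )

results = rank([
    Listing("RivalWidget", relevance=0.90, first_party=False),
    Listing("OwnWidget",   relevance=0.75, first_party=True),
])
print([l.name for l in results])  # ['OwnWidget', 'RivalWidget']
```

From the outside, such a ranking looks like an ordinary relevance sort, which is one reason the auditing and reverse engineering techniques discussed below matter.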

Antitrust agencies have a great responsibility, being the first arm of government regulation that can enforce competition principles in any market by imposing remedies regulatory in nature, in the time it would take Congress to enact a new law or set up a new ad hoc regulatory agency.[31]

Considering algorithms for pricing decisions, recent studies have revealed the lack of “comprehensive data of firms using algorithms and AI for pricing purposes.”[32] The available studies seem to show that algorithms used to monitor competitors’ prices are quite uncommon, and that the final price is rarely adjusted automatically by an algorithm after considering other companies’ prices. The same applies to personalized pricing. On the other hand, the risk that companies use algorithms to tacitly collude seems to be rapidly increasing.[33] However, conscious parallelism, which economists call “tacit collusion,” is generally not considered unlawful in itself. As for exclusionary conduct by means of algorithms, including tying and bundling, this seems perfectly plausible and, as in a non-algorithmic situation, it needs to be assessed case by case.
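A hypothetical sketch shows how simple such automated repricing would be; `fetch_rival_prices` is an invented placeholder standing in for scraped competitor data, not a real API:

```python
# A hypothetical automated price-matching rule -- the kind of conduct the
# studies cited above suggest is still uncommon in practice. All numbers
# and the fetch function are invented for illustration.

def fetch_rival_prices() -> list[float]:
    # Placeholder: a real system would scrape or ingest competitor prices.
    return [9.99, 10.49, 10.25]

def reprice(floor: float) -> float:
    """Undercut the cheapest rival by 1%, but never drop below cost."""
    cheapest = min(fetch_rival_prices())
    return max(round(cheapest * 0.99, 2), floor)

print(reprice(floor=8.00))  # 9.89
```

If many firms deploy rules like this against one another, prices can converge and stabilize without any agreement, which is precisely the tacit collusion concern and why such conduct is hard to fit into traditional antitrust categories.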

Several techniques exist to examine an algorithm’s design and functioning. Algorithmic auditing and reverse engineering seem to be the most promising methods to assess whether an algorithm can lead to anticompetitive behavior or increase the risk of bias. Several legislative proposals would mandate algorithmic impact assessments or audit provisions to ensure trustworthy AI development, with consumer protection and welfare as the main goal. On the other hand, antitrust agencies can impose similar obligations, increasing transparency and AI accountability, on large market players that use algorithms to limit competition and harm consumers, without the need to wait for a new law.[34] Therefore, antitrust can be critical in addressing algorithmic issues by investigating them and finding solutions with large players.
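As one example of what an audit check can compute, the sketch below measures a “disparate impact ratio,” the selection rate of the less favored group over that of the more favored group. The 0.8 threshold is borrowed from the U.S. “four-fifths” rule of thumb and is used here purely as an illustration, not as what any proposed legislation actually mandates:

```python
def selection_rate(outcomes: list[bool]) -> float:
    """Share of favorable outcomes (e.g., loan approved, ad shown)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Invented outcomes for two groups of consumers.
group_a = [True, False, True, False]   # 50% favorable
group_b = [True, True, True, False]    # 75% favorable

ratio = disparate_impact_ratio(group_a, group_b)
print(ratio, "-> flag for review" if ratio < 0.8 else "-> within threshold")
# 0.666... -> flag for review
```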

However, while several issues related to bias can be addressed through audit provisions and mandated algorithmic impact assessments, there seems to remain an unresolved challenge for legislators and antitrust enforcers: How do we effectively tackle bias in algorithms when we cannot even explain whether, and why, their results are biased? This seems an important question that goes beyond algorithmic bias by challenging the foundations of our present scientific method.[35]

Will algorithms learn to be unbiased autonomously? They might, but it seems important that we understand how they are capable of doing so. The alternative of invoking the algorithmic black box to justify what remains unknown does not seem to be an effective solution.

 

IV. CONCLUSION

In summary, antitrust agencies can potentially set the tone for future AI development by requiring relevant players to ensure certain standards of fairness in algorithms’ decision-making processes in order to preserve competition in specific circumstances. Transparency obligations seem to be particularly important to achieve this end. Therefore, how antitrust agencies enforce competition principles in the context of algorithms can affect “algorithmic bias.” However, we still have little technical comprehension of certain AI models and what they can predict. Thus, although critical, antitrust enforcement action might not be sufficient to address a more important question. Should we allow the adoption of algorithms whose results we cannot explain? Is this the start of a new scientific revolution, or an old problem with an easy solution?


[1] Academic Fellow, Center for Technology, Innovation and Competition, University of Pennsylvania Carey Law School.

[2] The problem of meaning in language is a fundamental issue belonging to the linguistic domain, which is extremely relevant in any discussion related to natural language processing, thus artificial intelligence (“AI”). Considering that AI algorithms are those that raise bias concerns, starting from a linguistic question seems to be extremely pertinent. It brings us back to the origin of the modern computer and the first AI algorithms. See e.g. Bennison Gray, The Problem of Meaning in Linguistic Philosophy, 59 Logique et Analyse 609 (1972).

[3] Alan M. Turing, Computing Machinery and Intelligence, 59 Mind 433 (1950); Noam Chomsky, Turing on the “Imitation Game,” in Parsing the Turing Test (Robert Epstein, Gary Roberts & Grace Beber eds., Springer, 2009).

[4] Noam Chomsky, Chickens fly like eagles. Humans don’t fly at all (May 17, 2017), Inframethodology, https://blog.cbs.dk/inframethodology/?p=568.

[5] Su Lin Blodgett, Solon Barocas, Hal Daumé III & Hanna Wallach, Language (Technology) is Power: A Critical Survey of “Bias” in NLP, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 5454 (2020).

[6] The Law Dictionary, BIAS Definition & Legal Meaning (Black’s Law Dictionary, 2nd ed.), https://thelawdictionary.org/bias/#:~:text=Inclination%3B%20bent%3B%20prepossession%3A%20a,mind%20perfectly%20open%20to%20conviction.

[7] See Tom Mitchell, Machine Learning 2 (1997) (“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience.”).

[8] See Yann LeCun et al., Deep Learning, 521 Nature 436 (2015).

[9] Tomaso Aste, What Machines Can Learn About Our Complex World – and What Can We Learn From Them? 7 (Mar. 4, 2021), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3797711.

[10] Sara Hooker, Moving Beyond ‘Algorithmic Bias is a Data Problem’, 2 Patterns 1 (2021). See also Nicol Turner Lee, Paul Resnick & Genie Barton, Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms, Brookings Report (May 22, 2019), https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/ (“Turner Lee has argued that it is often the lack of diversity among the programmers designing the training sample which can lead to the under-representation of a particular group or specific physical attributes.”).

[11] Drew Roselli, Jeanna Matthews & Nisha Talaga, Managing Bias in AI, Companion Proceedings of the 2019 World Wide Web Conference 539, 541 (2019).

[12] See Lee, Resnick & Barton, supra note 10 (“Turner Lee has argued that it is often the lack of diversity among the programmers designing the training sample which can lead to the under-representation of a particular group or specific physical attributes.”). See also Bo Cowgill & Catherine Tucker, Algorithmic Fairness and Economics 6 (Sep. 24, 2020), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3361280 (“According to the Bureau of Labor Statistics in 2018, software engineers are more white, male, well-educated and better-paid than America as a whole.”).

[13] Hooker, supra note 10.

[14] See Jeff Larson, Surya Mattu, Lauren Kirchner & Julia Angwin, How We Analyzed the COMPAS Recidivism Algorithm, ProPublica (May 23, 2016), https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm; Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan & Ashesh Rambachan, Algorithmic Fairness, 108 AEA Papers and Proceedings 22 (2018).

[15] See Lee, Resnick & Barton, supra note 10, at 5; Larson, Mattu, Kirchner & Angwin, supra note 14 (“Black defendants were twice as likely as white defendants to be misclassified as a higher risk of violent recidivism, and white recidivists were misclassified as low risk 63.2 percent more often than black defendants. Black defendants who were classified as a higher risk of violent recidivism did recidivate at a slightly higher rate than white defendants (21 percent vs. 17 percent), and the likelihood ratio for white defendants was higher, 2.03, than for black defendants, 1.62.”).

[16] Id.

[17] Hooker, supra note 10.

[18] News European Parliament, AI Act: A Step Closer to the First Rules on Artificial Intelligence, Press Releases (May 11, 2023), https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence.

[19] European Commission, Regulatory framework proposal on artificial intelligence (last update Sept. 29, 2022), https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

[20] Alex Engler, The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment, Brookings Report (Apr. 25, 2023), https://www.brookings.edu/research/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/.

[21] See World Economic Forum, The European Union’s Artificial Intelligence Act, explained (Mar. 28, 2023), https://www.weforum.org/agenda/2023/03/the-european-union-s-ai-act-explained/. See also, the Algorithmic Accountability Act of 2022 in the United States. Wyden, Booker and Clarke Introduce Algorithmic Accountability Act of 2022 To Require New Transparency And Accountability For Automated Decision Systems, Ron Wyden United States Senator for Oregon (Feb. 3, 2022), https://www.wyden.senate.gov/news/press-releases/wyden-booker-and-clarke-introduce-algorithmic-accountability-act-of-2022-to-require-new-transparency-and-accountability-for-automated-decision-systems.

[22] OECD, Algorithmic Competition, OECD Competition Policy Roundtable Background Note 8-9 (2023), https://www.oecd.org/competition/algorithmic-competition.htm. [OECD Report].

[23] Id. at 7, 10.

[24] Giovanna Massarotto, Why AI and Competition Law Matter?, On-Topic, 3 Concurrences 2 (2021); Giovanna Massarotto, Using Tech to Fight Big Tech, Bloomberg Law (Sep. 27, 2021), https://news.bloomberglaw.com/tech-and-telecom-law/using-tech-to-fight-big-tech.

[25] Stefan Hunt, CMA’s new DaTA unit: exciting opportunities for data scientists, Competition and Markets Authority Blog (Oct. 24, 2018), https://competitionandmarkets.blog.gov.uk/2018/10/24/cmas-new-data-unit-exciting-opportunities-for-data-scientists/.

[26] See e.g. Brian Fung, DOJ will hire more data experts to scrutinize digital monopolies, antitrust chief says, CNN Business (Mar. 6, 2023), https://www.cnn.com/2023/03/06/tech/doj-data-experts/index.html.

[27] See e.g. OECD Report at 10; Antonio Capobianco, The Impact of Algorithms on Competition and Competition Law, ProMarket (May 23, 2023), https://www.promarket.org/2023/05/23/the-impact-of-algorithms-on-competition-and-competition-law/.

[28] Peter Cohen, Robert Hahn, Jonathan Hall, Steven Levitt & Robert Metcalfe, Using Big Data to Estimate Consumer Surplus: The Case of Uber, NBER Working Paper (Sep. 2016), https://www.nber.org/papers/w22627.

[29] See e.g. Richard Feasey & Jan Krämer, Implementing Effective Remedies for Anti-Competitive Intermediation Bias on Vertically Integrated Platforms, CERRE Centre on Regulation in Europe Report 5 (Oct. 2019), available at https://cerre.eu/wp-content/uploads/2020/05/cerre_report_intermediation_bias_remedies.pdf.

[30] European Commission Press Release, Antitrust: Commission fines Google €2.42 billion for abusing dominance as search engine by giving illegal advantage to own comparison shopping service, (Jun. 27, 2017), https://ec.europa.eu/commission/presscorner/detail/en/IP_17_1784; EU Commission, Google Search (Shopping), AT.39740 (Jun. 27, 2017).

[31] See Giovanna Massarotto, Antitrust Settlements. How a Simple Agreement Can Drive the Economy 75, 145 (Wolters Kluwer, 2019); Giovanna Massarotto, Grasping the Meaning of Big Tech Antitrust Consent, Competition Policy International (Feb. 2020), https://www.competitionpolicyinternational.com/grasping-the-meaning-of-big-tech-antitrust-consent/.

[32] See Capobianco, supra note 27.

[33] Id.

[34] See supra note 31.

[35] See Aste, supra note 9, at 7.