In the digital age, the intersection of data, technology, and antitrust enforcement has brought algorithms into focus as potential tools for uncovering anticompetitive practices and improving decision-making. However, concerns about algorithmic bias have raised questions about their use in this critical field. This article examines the balance between the benefits of algorithms in antitrust enforcement and the genuine concerns surrounding bias. It argues that while algorithmic bias should not be ignored, algorithms can be valuable tools when carefully designed, and that the overemphasis on bias concerns stems from a lack of technical understanding. The article explores the use of algorithms in law enforcement, highlights the risks of bias, and shows how algorithmic design can mitigate these concerns. It then delves into the specific context of antitrust enforcement, explaining why the problem of algorithmic bias is less salient there than in other regulatory areas. By offering a nuanced perspective on the potential and threats of algorithmic tools, the article contributes to the ongoing discourse on the responsible and effective utilization of algorithms in antitrust enforcement.
By Holli Sargeant & Teodora Groza[1]
I. INTRODUCTION
In the digital age, where data and technology permeate every facet of our lives, the field of antitrust enforcement finds itself at a crossroads. As policymakers and regulators grapple with the ever-evolving landscape of digital marketplaces, the spotlight has turned toward the power of algorithms to enforce antitrust rules. Algorithms have the unique potential to empower antitrust enforcement by unearthing anticompetitive practices, predicting market trends, and enabling more informed and economics-based decision-making. However, concerns about algorithmic bias have cast a shadow over their use in this critical domain.
This article explores the delicate balance between the benefits of algorithms in antitrust enforcement and the genuine concerns surrounding algorithmic bias. While algorithmic bias, the potential for algorithms to discriminate or perpetuate unfair outcomes, should not be dismissed, it is essential to approach the topic with a nuanced understanding. Despite concerns about algorithmic bias, we show that algorithms are a valuable tool for antitrust enforcement, and a lack of technical understanding has led to an overinflation of these concerns.
The article is structured as follows. It begins with an overview of the academic literature on the use of algorithms in law enforcement and of the concerns raised in this literature about the potential effects of algorithmic bias. Then, it engages with the literature on the potential of algorithms to improve law enforcement, explaining how concerns over algorithmic bias can be mitigated through careful algorithmic design. Leveraging these insights, it gives an overview of the use of algorithms in antitrust enforcement and explains that concerns over algorithmic bias are less relevant in antitrust as opposed to other fields of law due to certain particularities of antitrust laws.
The article makes two principal contributions. First, it offers a nuanced account of the potential and threats of leveraging algorithmic tools in law enforcement by introducing some critical aspects of machine learning that expose common misconceptions about algorithmic bias. Second, it explains that concerns over algorithmic bias are less relevant to antitrust enforcement than to other regulatory fields.
II. MAPPING THE DEBATE
As a recent report by the European Union Agency for Fundamental Rights put it, “AI is everywhere and affects everyone.”[2] Artificial intelligence (“AI”) tools have become commonplace in law enforcement. These methods are primarily machine learning (“ML”) approaches, a subset of AI, which is the focus of this article. Tax and construction authorities, social security agencies, and antitrust authorities worldwide increasingly rely on ML to perform their duties. A significant strand of the existing literature on the use of ML in law enforcement highlights the threats posed by new technologies in terms of enhancing existing biases and reinforcing the discriminatory practices that result from them. Nonetheless, algorithmic tools provide a significant opportunity for law enforcement and, if carefully designed, can mitigate human biases, increase efficiency, and ultimately lead to a shift from the current reactive approach to regulation to an adaptive one.[3]
This section maps out the use of algorithms in law enforcement and explains the risks associated with algorithmic bias. Building on these, it introduces its key argument that well-designed algorithmic tools have a game-changing potential for law enforcement.
A. The Use of Algorithms in Law Enforcement
In recent times, there has been a surge in the application of algorithms across different aspects of law enforcement, although it is often difficult to ascertain precisely which algorithms are being built and deployed. One significant area where this trend is evident is criminal law enforcement, where the adoption of algorithmic systems presents a new dimension of crime prevention and detection. These data-driven models utilize vast arrays of information to predict potential crime hotspots, identify potential offenders, and even assist in decision-making processes.[4] Algorithmic models are also used in the judicial enforcement of criminal bail and sentencing decisions.[5]
Beyond criminal law enforcement, algorithms have also found utility in the administration of social welfare programs. The Australian Government used an automated debt recovery program, known as “Robodebt,” to identify any discrepancies between an individual’s declared income to the Australian Taxation Office and the individual’s income reported to Centrelink (the social welfare department).[6] The algorithm faced considerable backlash for inaccuracies, which resulted in unjustified debt notices, leading to widespread public distrust of government use of algorithmic tools.[7]
The use of algorithms in law enforcement highlights both the potential benefits of algorithmic enforcement and the significant challenges it raises. While these models hold promise to enhance efficiency and effectiveness, ensuring transparency, fairness, and accountability is crucial to prevent unintended consequences and preserve public trust in algorithmic decision-making processes.
B. What is Algorithmic Bias?
Algorithmic bias refers to the potential for algorithms to discriminate against certain individuals or perpetuate unfair outcomes based on factors such as race, gender, or other characteristics.[8] In simple words, it covers situations when “AI makes decisions that are unfair to certain groups.”[9] Nonetheless, unlike humans, AI has neither intentionality nor biases: This means that algorithms cannot be independently inclined toward certain outcomes and do not prefer certain characteristics at the expense of others. The issue of algorithmic bias arises where “[algorithmic] models encode human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives.”[10] Seen through this lens, the notion of algorithmic bias is rather a misnomer: It is not the algorithm that is biased but the humans behind it.
Algorithmic bias can arise as a byproduct of two factors: biased datasets and biased design choices. On the one hand, a wealth of literature has identified the risk that algorithmic models replicate human biases encoded in training datasets.[11] Reflecting on the use of large datasets in antitrust enforcement, Eleanor Fox noted that “[w]hen you talk about data, you also have to talk about values . . . And assumptions.”[12] To illustrate how datasets can disadvantage certain groups, take the example of facial-analysis datasets that contain a preponderance of lighter-skinned subjects: Reliance on such datasets leads to higher error rates for subjects with darker skin.[13]
However, it is not all about the input data. A less developed strand of literature highlights that algorithmic bias is not only a byproduct of the data being used but also of the “interactions between the data and the model design choices.”[14] Consequently, bias is learned and sometimes amplified, both through the goals of the model design and through the reproduction of patterns found in the training data.[15] This leads to the fundamental insight that the design of algorithms is not value-neutral, and the threshold rules or weights assigned in the algorithm may reveal human biases.[16]
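A minimal sketch in Python makes the point concrete (the data and the risk scores below are synthetic and purely illustrative, not drawn from any deployed system): the predictive model is held fixed, yet the choice of decision threshold, a design decision rather than a property of the data, produces different false positive rates for two groups whose score distributions differ.

```python
# Minimal, purely illustrative sketch with synthetic data and hypothetical
# "risk scores": the model is held fixed, but where the decision threshold is
# placed determines how errors fall across two groups whose score distributions
# differ (e.g. because one group was under-represented in training data).
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical scores: group B's distribution is shifted upward relative to group A.
scores_a = rng.normal(loc=0.45, scale=0.15, size=n).clip(0, 1)
scores_b = rng.normal(loc=0.55, scale=0.15, size=n).clip(0, 1)

# Ground truth drawn so that higher scores are genuinely more likely to be positive.
truth_a = rng.random(n) < scores_a
truth_b = rng.random(n) < scores_b

def false_positive_rate(scores, truth, threshold):
    """Share of truly negative cases that the chosen threshold would flag."""
    flagged = scores >= threshold
    negatives = ~truth
    return (flagged & negatives).sum() / negatives.sum()

# The same model, three equally "technical" threshold choices.
for threshold in (0.5, 0.6, 0.7):
    fpr_a = false_positive_rate(scores_a, truth_a, threshold)
    fpr_b = false_positive_rate(scores_b, truth_b, threshold)
    print(f"threshold={threshold:.1f}  FPR group A={fpr_a:.2%}  FPR group B={fpr_b:.2%}")
```

Even in this deliberately simple setting, the error burden of any given threshold is unevenly distributed between the two groups, which is precisely the sense in which a design choice, rather than the algorithm itself, carries the value judgment.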
While some ML models are easily explainable, more complex algorithms pose greater challenges, hence the recurring metaphor of the algorithmic “black box.”[17] However, describing all ML as a black box is somewhat of a misnomer. Even in large neural networks, we know how algorithms make predictions and usually understand the information used. The problem is not that the decision-making process is unexplainable but rather that predictions are made in a way that is difficult for humans to grasp due to the millions of parameters involved.[18] This reality is often confused with the well-known observation that ML decisions do not inherently generate reasons or explanations. Unlike human decision-makers –– judges in particular –– algorithmic tools do not necessarily accompany their decisions with explanations of how the outcome was reached. Achieving full transparency in ML remains a work in progress, and efforts toward explainability are crucial for fostering public scrutiny and moral accountability in decision-making processes.
The problem of algorithmic bias in law enforcement is most acute in regulatory fields that deal directly with human subjects and their rights, such as criminal, migration, or social security law. A case in point is the 2021 Dutch childcare benefits scandal, in which the Dutch Tax and Customs Administration used algorithms that treated “foreign sounding names” and “dual nationality” as indicators of potential fraud. The result was that thousands of (racialized) low- and middle-income families were “subjected to scrutiny, falsely accused of fraud, and asked to pay back benefits that they had obtained completely legally.”[19]
C. Algorithmic Opportunity
The threat of algorithmic bias should not automatically translate into a wholesale rejection of the use of ML for law enforcement. To begin with, circling back to the insight that model design choices impact the functioning of algorithms, it is possible to make choices that circumvent potential biases enshrined in data pipelines.[20] Consequently, “recognizing how model design impacts harm opens up new mitigation techniques that are less burdensome than comprehensive data collection.”[21]
Careful design not only prevents human biases from being passed on to algorithmic tools; it also represents a material opportunity to overcome certain shortcomings of human decision-makers. We must reiterate that “human decision-making is not significantly more accountable than AI,”[22] with some scholars contending that algorithms can help move away from subjective human judgments to data-driven decisions that may be more accurate and unbiased.[23] Following such reasoning, algorithms may “overcome the cognitive limits and social biases of human decision-makers, enabling more objective and fair decisions.”[24] Looking at the stages of developing ML tools,[25] the classification and decision rules that set thresholds for the actions a model may take can factor in the costs and benefits of making different decisions based on an ML prediction under uncertainty.[26] Supervised ML builds on statistical decision theory, which attempts to model the probability distribution of the possible real-world effects of each decision option.[27] This allows a decision-maker to assign different weights to different risks or opportunities when acting on an algorithmic prediction under future uncertainty. Decision-makers already make such trade-offs, but when modeling algorithms, the value judgments become clearer through encoded preferences.[28]
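To illustrate how such preferences can be encoded, the following sketch applies textbook expected-cost reasoning to an ML prediction (the cost figures and the enforcement scenario are hypothetical assumptions chosen only for illustration): once the decision-maker states the relative costs of a wrongful intervention and of missed harm, the threshold applied to the predicted probability follows mechanically, and the value judgment is visible in the code rather than buried in individual discretion.

```python
# Minimal sketch (hypothetical cost figures and scenario): statistical decision
# theory applied to an ML prediction. The enforcer picks the action with the
# lowest expected cost, so the relative weight placed on wrongful intervention
# versus missed harm is an explicit, auditable choice.

COST_FALSE_INTERVENTION = 1.0   # assumed cost of investigating benign conduct
COST_MISSED_HARM = 4.0          # assumed cost of letting harmful conduct pass

def expected_cost_of_intervening(p_harm: float) -> float:
    # Intervention is only "wasted" if the conduct is in fact benign.
    return (1 - p_harm) * COST_FALSE_INTERVENTION

def expected_cost_of_not_intervening(p_harm: float) -> float:
    # Inaction is only costly if the conduct is in fact harmful.
    return p_harm * COST_MISSED_HARM

def decide(p_harm: float) -> str:
    """Choose the action with the lower expected cost under the stated weights."""
    if expected_cost_of_intervening(p_harm) < expected_cost_of_not_intervening(p_harm):
        return "intervene"
    return "do not intervene"

# The implied decision threshold follows directly from the encoded preferences.
threshold = COST_FALSE_INTERVENTION / (COST_FALSE_INTERVENTION + COST_MISSED_HARM)
print(f"implied threshold on predicted probability of harm: {threshold:.2f}")  # 0.20

for p in (0.10, 0.20, 0.35):
    print(f"predicted probability {p:.2f} -> {decide(p)}")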
Moreover, there is a potential for algorithms to centralize decision-making. A “single algorithm has the potential to play the role of hundreds or thousands of human decision-makers,” creating a potentially more transparent and auditable process.[29] Reliance on ML can thus harmonize bodies of law that are currently made up of patchworks of decisions taken by agencies and courts. Furthermore, the use of AI can make it easier to audit and evaluate the fairness of decisions.[30] Unlike relying on hundreds or thousands of individual human decision-makers, a single algorithm can be monitored and assessed for biases.
In conclusion, the reality of algorithmic bias should not overshadow the potential benefits that algorithms can offer in antitrust enforcement. While concerns about bias are valid, dismissing algorithms altogether would mean disregarding the opportunity to improve decision-making and overcome existing human biases. By carefully designing, implementing, and overseeing algorithmic tools, it is possible to harness their power while minimizing the risks of bias. Striking a balance between the benefits and risks is crucial to navigating a path toward responsible and effective utilization of algorithms in antitrust enforcement.
III. ANTITRUST ENFORCEMENT AND ALGORITHMIC BIAS
Whereas fields ranging from tax to criminal law have relied on algorithmic tools for decades, the use of algorithms in antitrust enforcement has lagged behind. The topic has matured into a key focus of the antitrust community since 2021, when the Computational Antitrust project was launched to “explore how legal informatics could foster the automation of antitrust procedures and the improvement of antitrust analysis.”[31] As the project highlights, using algorithms in antitrust enforcement is not only already a reality but also the future of effective enforcement in the context of constantly evolving markets. This section briefly documents algorithmic tools already deployed by antitrust agencies and then explains why the problem of algorithmic bias is less salient in the context of antitrust as compared to other legal fields.
A. Algorithms and Antitrust Enforcement: Existing Tools
Algorithms are already a reality in antitrust enforcement. The Computational Antitrust project ran an implementation survey in early 2022 to assess the current uses of computational tools by participating antitrust agencies.[32] As this survey shows, antitrust agencies worldwide already rely on algorithmic tools for both procedural and substantive purposes. On the procedural side, such tools are used to (1) digitalize analog data collections through document management systems and (2) automate procedural phases such as document submissions, significantly speeding up antitrust investigations. On the substantive side, the most widespread uses of computational tools are the following: (1) data mining and screening techniques for spotting markets and market structures that are most likely to facilitate collusion between market players; (2) information-gathering algorithms for identifying price trends and their evolution over time; and (3) ML algorithms for analyzing public procurement data in order to spot bid-rigging practices.
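As a stylized illustration of the screening logic behind the last category (the tender data, the statistic, and the cut-off below are hypothetical and deliberately simplified; the screens actually deployed by agencies are considerably more sophisticated), a basic variance screen flags tenders whose bids cluster unusually tightly for further human review:

```python
# Minimal sketch (hypothetical data and threshold): a simple variance screen of
# the kind discussed in the bid-rigging screening literature. Unusually low
# dispersion of bids within a tender is treated as one possible signal of
# coordination and flagged for human review, not as proof of collusion.
import statistics

# Hypothetical procurement records: tender id -> submitted bids (in EUR).
tenders = {
    "T-001": [100_000, 101_500, 100_800, 100_200],   # suspiciously tight
    "T-002": [98_000, 120_000, 135_500, 110_000],    # ordinary dispersion
    "T-003": [250_000, 251_000, 250_400],            # suspiciously tight
}

CV_THRESHOLD = 0.02  # illustrative cut-off; would be calibrated on real data

def coefficient_of_variation(bids):
    """Standard deviation of bids relative to their mean."""
    return statistics.pstdev(bids) / statistics.mean(bids)

for tender_id, bids in tenders.items():
    cv = coefficient_of_variation(bids)
    flag = "flag for review" if cv < CV_THRESHOLD else "no flag"
    print(f"{tender_id}: CV={cv:.3f} -> {flag}")
```

The point of the sketch is not the particular statistic but the workflow it exemplifies: large volumes of procurement data can be screened automatically, with human investigators deciding what to do with the cases that are flagged.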
The examples above attest that computational tools have become commonplace for competition authorities worldwide. Given their widespread use, reflection on the potential biases of such tools is imperative.
B. Algorithmic Bias in Antitrust Enforcement
Within the landscape of all regulatory bodies, certain features of antitrust law single it out as particularly suited for the use of algorithmic tools. To begin with the most obvious, as opposed to bodies of law dealing directly with the rights and responsibilities of human subjects, the regulatory subjects of antitrust law are firms.[33] Consequently, risks of bias or discrimination operate on a different level. Instead of being biased towards certain population groups, antitrust enforcement can be biased towards certain types of firms based on their size or origin, or towards achieving certain outcomes.[34] Whereas such biases have a detrimental impact on the capacity of antitrust to promote consumer welfare, they do not pose threats to the fundamental interests of human subjects in the way in which biases in enforcing migration law, criminal law, and social security law do. Even in cases in which human biases do get passed on to algorithms tasked with enforcing antitrust laws, the effects are less consequential than in other fields of law.
Secondly, antitrust investigations are increasingly data-intensive. As markets grow more complex, understanding the impact of market behavior requires analyzing sizeable datasets. Even the first step of antitrust investigations, namely defining the relevant market, requires assessments of data points ranging from consumer preferences to transportation costs. More data translates into more precision. In the absence of exhaustive data, agencies need to rely on presumptions in order to filter out anticompetitive conduct. A case in point is the EU Digital Markets Act (“DMA”), which takes company size as a proxy for anticompetitive potential.[35] In this sense, the law is already biased against large platforms, and reliance on algorithmic tools that can inject more nuance into the antitrust analysis would be a move in the opposite direction, reducing bias instead of amplifying it. Furthermore, given the substantial corpus of antitrust decisions and cases, algorithmic tools can also be deployed to clarify existing law, harmonize decisional practice, and ultimately reduce the window of discretion of human decision-makers and thereby diminish the potential impact of their biases.[36]
Thirdly, antitrust laws are broad, open-textured provisions that have required courts to flesh out their meaning.[37] This nature of antitrust laws has enabled the field to evolve organically and to adapt to changes in business dynamics. This is, however, a double-edged sword: Malleable laws are great for adapting to societal progress, yet this comes with costs for legal certainty and predictability.[38] Analyzing antitrust enforcement, Lim cautions against rejecting the use of algorithms on grounds of lack of transparency and potential bias. According to him, the existing rules and jurisprudence are themselves akin to a black box, leaving ample discretion for decision-makers to weigh costs, benefits, and counterfactuals.[39] Lim cites complaints from Chief Justice Roberts on the “amorphous [nature of the] rule of reason,”[40] and from Justice Breyer, who argues that implementing procompetitive benefits in the rule of reason analysis is an “absolute mystery.”[41] The move from human decision-makers to algorithmic tools can lead to an increase in decisional transparency. As the previous section has shown, it is possible to develop ML tools that can weigh the costs and benefits of intervention and identify the welfare-maximizing outcome based purely on efficiency considerations.
Fourthly, antitrust rules are static instruments that seek to regulate increasingly dynamic markets. Antitrust agencies from the EU to the U.S. are increasingly sympathetic to switching from existing open-ended laws to ex-ante rules that render certain types of conduct per se illegal.[42] In the context of rapidly evolving markets, reliance on detailed regulatory instruments that contain absolute prohibitions is inherently biased toward the status quo and privileges certain market structures at the expense of others. By contrast, leveraging algorithmic tools represents a way to circumvent this bias, potentially enabling agencies to fine-tune existing legislation in order to make it more receptive to the dynamics of contemporary markets. This would represent the chance to move from a reactive approach to antitrust enforcement to an adaptive one, mindful of market developments. As an example, Pentland and his co-authors propose expanding the definition of monopoly power and the premerger review process in order to take into account the data-intensive nature of contemporary markets.[43] Moving away from the static analysis of dominance/monopoly power based on market share thresholds as a proxy, the authors propose a fresh analysis factoring in the degree of “data control” of the entities at stake.
IV. CONCLUSION
In this article, we seek to challenge the persistent assumption that the use of algorithms in law enforcement in general and in antitrust in particular is at insurmountable risk of bias and discriminatory outcomes. The potential benefits for antitrust enforcement should not be so readily dismissed: In certain respects, reliance on algorithmic tools can even diminish the risks of bias. Furthermore, antitrust law is particularly well suited for the use of algorithmic tools for several reasons: First, its regulatory targets are firms, not citizens; second, antitrust investigations are data-intensive; third, antitrust laws are notoriously open-ended, leaving outsize discretion to human decision-makers; fourth, antitrust needs to adapt to rapidly evolving market dynamics.
[1] Holli Sargeant is a PhD Candidate at the Faculty of Law, University of Cambridge. Teodora Groza is a PhD Candidate at Sciences Po Law School and Editor-in-Chief of the Stanford Computational Antitrust Journal.
[2] European Union Agency for Fundamental Rights, Bias in Algorithms: Artificial Intelligence and Discrimination 3 (2022), https://data.europa.eu/doi/10.2811/536044.
[3] Lori S Bennear & Jonathan B Wiener, Adaptive Regulation: Instrument Choice for Policy Learning over Time, Draft working paper (2019), https://www.hks.harvard.edu/sites/default/files/centers/mrcbg/files/Regulation%20-%20adaptive%20reg%20-%20Bennear%20Wiener%20on%20Adaptive%20Reg%20Instrum%20Choice%202019%2002%2012%20clean.pdf.
[4] Miri Zilka, Holli Sargeant & Adrian Weller, Transparency, Governance and Regulation of Algorithmic Tools Deployed in the Criminal Justice System: a UK Case Study, in Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (2022), https://doi.org/10.1145/3514094.3534200 (last visited Jun 8, 2022).
[5] See e.g. Julia Angwin et al., Machine Bias, ProPublica (2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing; Zilka, Sargeant & Weller, supra note 4.
[6] Jordan Hayne & Matthew Doran, Government to pay back $721m in Robodebt, all debts to be waived, ABC News, May 29, 2020, https://www.abc.net.au/news/2020-05-29/federal-government-refund-robodebt-scheme-repay-debts/12299410 (last visited May 30, 2023).
[7] Matthew Doran, Federal Government ends Robodebt class action with settlement worth $1.2 billion, ABC News, Nov. 16, 2020, https://www.abc.net.au/news/2020-11-16/government-response-robodebt-class-action/12886784 (last visited May 30, 2023); Australian Human Rights Commission, Human Rights and Technology, (2021).
[8] Australian Human Rights Commission, Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias, (2020), https://humanrights.gov.au/our-work/rights-and-freedoms/publications/using-artificial-intelligence-make-decisions-addressing.
[9] PricewaterhouseCoopers, Understanding algorithmic bias and how to build trust in AI, PwC (2022), https://www.pwc.com/us/en/tech-effect/ai-analytics/algorithmic-bias-and-trust-in-ai.html (last visited May 30, 2023).
[10] Cathy O’Neil, Weapons of math destruction: how big data increases inequality and threatens democracy (2016).
[11] Robin Nunn, Discrimination in the Age of Algorithms, in The Cambridge Handbook of the Law of Algorithms (Woodrow Barfield ed., 2020), https://doi.org/10.1017/9781108680844.010; Deborah Hellman, Measuring Algorithmic Fairness, 106 Va. Law Rev. 811 (2020); Jon Kleinberg et al., Discrimination in the Age of Algorithms, 10 Journal of Legal Analysis 113 (2018).
[12] Andrew Ross Sorkin et al., Can Jane Fraser Fix Citigroup?, The New York Times, Feb. 11, 2021, https://www.nytimes.com/2021/02/11/business/dealbook/jane-fraser-citigroup.html (last visited Apr 4, 2023).
[13] Sara Hooker, Moving beyond “algorithmic bias is a data problem,” 2 Patterns (2021), https://doi.org/10.1016/j.patter.2021.100241.
[14] Id.
[15] Jeremias Adams-Prassl, Reuben Binns & Aislinn Kelly-Lyth, Directly Discriminatory Algorithms, 86 The Modern Law Review 144 (2023), https://onlinelibrary.wiley.com/doi/abs/10.1111/1468-2230.12759 (last visited Feb 19, 2023); Reuben Binns, Fairness in Machine Learning: Lessons from Political Philosophy, 81 Proceedings of Machine Learning Research 149 (2018), http://arxiv.org/abs/1712.03586 (last visited Nov 5, 2021); Sam Corbett-Davies & Sharad Goel, The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning, arXiv (2018); Sahil Verma & Julia Rubin, Fairness Definitions Explained, in Proceedings of the International Workshop on Software Fairness 1 (2018), https://doi.org/10.1145/3194770.3194776.
[16] Dan Burk, Algorithmic Fair Use, 86 University of Chicago Law Review (2019).
[17] Pantelis Linardatos, Vasilis Papastefanopoulos & Sotiris Kotsiantis, Explainable AI: A Review of Machine Learning Interpretability Methods, 23 Entropy 18 (2020), https://www.mdpi.com/1099-4300/23/1/18 (last visited May 23, 2022).
[18] Id.; Finale Doshi-Velez & Been Kim, Towards A Rigorous Science of Interpretable Machine Learning, arXiv (2017), http://arxiv.org/abs/1702.08608.
[19] European Parliament, The Dutch childcare benefit scandal, institutional racism and algorithms, Parliamentary question – O-000028/2022 (28.6.2022), https://www.europarl.europa.eu/doceo/document/O-9-2022-000028_EN.html (last visited May 30, 2023).
[20] Hooker, supra note 13.
[21] Id.
[22] Lim, supra note 11, at 48.
[23] Kleinberg et al., supra note 11; Alice Xiang, Reconciling Legal and Technical Approaches to Algorithmic Bias, 88 Tenn. Law Rev. 649 (2021); Virginia Eubanks, Automating Inequality (2018); Alex P. Miller, Want Less-Biased Decisions? Use Algorithms., Harvard Business Review, Jul. 2018, https://hbr.org/2018/07/want-less-biased-decisions-use-algorithms (last visited May 26, 2023).
[24] Ben Green, Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness, 35 Philos. Technol. 90 (2022), https://link.springer.com/10.1007/s13347-022-00584-6 (last visited Jan 22, 2023); See also S.1593—Pretrial Integrity and Safety Act of 2017, (2017), http://www.congress.gov/ (last visited May 26, 2023); Cass R. Sunstein, Two Conceptions of Procedural Fairness, 73 Social Research 619 (2006), http://www.jstor.org/stable/40971840 (last visited Aug 23, 2022).
[25] See discussion in Holli Sargeant, Algorithmic decision-making in financial services: economic and normative outcomes in consumer credit, AI Ethics (2022), https://doi.org/10.1007/s43681-022-00236-7 (last visited Nov 23, 2022).
[26] Ethem Alpaydin, Introduction to Machine Learning (4 ed. 2020), https://doi.org/10.7551/mitpress/13811.001.0001; Trevor Hastie, Robert Tibshirani & Jerome Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2 ed. 2009).
[27] Alpaydin, supra note 26; Hastie, Tibshirani & Friedman, supra note 26.
[28] Jon Kleinberg et al., Human Decisions and Machine Predictions, 133 The Quarterly Journal of Economics 237 (2018), https://doi.org/10.1093/qje/qjx032 (last visited Nov 22, 2022).
[29] Xiang, supra note 23.
[30] Adriano Koshiyama, Emre Kazim & Philip Treleaven, Algorithm Auditing: Managing the Legal, Ethical, and Technological Risks of Artificial Intelligence, Machine Learning, and Associated Algorithms, 55 Computer 40 (2022); Alfred Ng, Can Auditing Eliminate Bias from Algorithms?, The Markup (2021), https://themarkup.org/the-breakdown/2021/02/23/can-auditing-eliminate-bias-from-algorithms (last visited Sep 5, 2022); Pauline Kim, Auditing Algorithms for Discrimination, 166 University of Pennsylvania Law Review Online (2017), https://scholarship.law.upenn.edu/penn_law_review_online/vol166/iss1/10.
[31] Stanford Law School, Codex Project: Stanford Center for Legal Informatics, Stanford Law School, https://law.stanford.edu/codex-the-stanford-center-for-legal-informatics/computational-antitrust-project/ (last visited May 30, 2023).
[32] Thibault Schrepel & Teodora Groza, The Adoption of Computational Antitrust by Agencies: 2021 Report, 2 Stanford Computational Antitrust 79 (2022).
[33] Edward Rock, Corporate Law Through an Antitrust Lens, 92 Colum. L. Rev. 497 (1992), https://scholarship.law.upenn.edu/faculty_scholarship/723.
[34] Anu Bradford, Robert Jackson & Jonathon Zytnick, Is EU Merger Control Used for Protectionism? An Empirical Analysis, 15 J. Empirical Legal Stud. 165 (2018), https://scholarship.law.columbia.edu/faculty_scholarship/2093.
[35] Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act), 265 OJ L (2022).
[36] Kleinberg et al., supra note 28.
[37] Daniel Crane, Antitrust Antitextualism, 96 Notre Dame L. Rev. 1205 (2021).
[38] Teodora Groza et al., Exploring Computational Antitrust: A Theoretical Excursus, Sciences Po L. Rev. (forthcoming).
[39] Kleinberg et al., supra note 28.
[40] FTC v. Actavis, Inc., 570 U.S. 136, 160 (2013) (Roberts, C.J., dissenting).
[41] Transcript of Oral Argument at 24, Ohio v. Am. Express Co., 138 S. Ct. 2274 (2018) (No. 16-1454).
[42] Aurelien Portuese, American Precautionary Antitrust: Unrestrained FTC Rulemaking Authority, Info. Tech. & Innovation Found. (2022).
[43] Robert Mahari, Sandro Claudio Lera & Alex Pentland, Time for a new antitrust era: refocusing antitrust law to invigorate competition in the 21st century, 1 Stanford Computational Antitrust (2021).