The questions in FTC v. Qualcomm are consequential in setting competitive norms in an economy anxious about the exercise of market power. Like many other antitrust cases, this one shows symptoms of antitrust law’s inherent vulnerability to ideology stampeding facts and data. Seen as an algorithm, antitrust has had patches and updates over the years. Still, few have recognized the breadth and depth of the transformation artificial intelligence (“AI”) can bring to antitrust adjudication. AI enables courts to better render evidence-based decisions. As a tool, it is non-ideological and enables courts to minimize ideological stampeding. AI is a powerful new partner in making sense of the complex, dynamic, and fast-moving licensing markets many businesses operate in; courts and agencies can harness its ability to model price and innovation effects more precisely. Implementing AI poses challenges of data accountability, data availability, and data bias. These challenges can be addressed. The time to retool antitrust is now.

By Daryl Lim 1

 

I. INTRODUCTION

Many antitrust stakeholders will remember 2020 as the year of antitrust’s rebirth. Congressional politics, network economics, missteps on privacy, and corporate hubris converged to catalyze a profound reassessment of antitrust law. While much of the attention over excessive private power has focused on Facebook, Google, Amazon, and Apple, it remains to be seen whether current efforts to rein them in will generate any meaningful change. In the meantime, the Federal Trade Commission (“FTC”)’s case against Qualcomm has already been heralded as “the biggest decision since the Microsoft case.”2 Unfortunately, that case has also revealed a disquieting arbitrariness in antitrust law, one that artificial intelligence can help fix.3

 

II. WHEN DOES “HYPERCOMPETITIVE” BECOME “ANTICOMPETITIVE”?

During the Obama Administration’s waning days, the FTC voted 2-1 to sue Qualcomm, the world’s largest modem chipmaker. On the strength of a two-year trial record and hundreds of pages of factual findings, the district court found for the FTC.4 Its 233-page judgment detailed Qualcomm’s “no license, no chips” licensing practices that threatened to cripple customers refusing its terms by cutting off their chip supply.5 The court concluded that Qualcomm’s “carrot and stick” strategy allowed it to impose an “artificial and anticompetitive surcharge” on rivals’ modem chip prices, entrenching its market power in violation of antitrust law.6

The U.S. Justice Department, in an extraordinary step, filed a Statement of Interest protesting the FTC’s case.7 In doing so, it simultaneously acted in favor of a private party and against its sister federal antitrust agency. The Statement urged the U.S. Court of Appeals for the Ninth Circuit to stay the injunction pending appeal. The FTC responded that the Justice Department had “mischaracterized” the district court’s analysis, asserted “unsubstantiated concerns” about the impact of the judgment on R&D, and advocated for the law to immunize Qualcomm’s licensing practices from antitrust scrutiny.8 Undeterred, the Justice Department followed up with an amicus brief supporting Qualcomm, and took an even more extraordinary step by arguing on appeal alongside Qualcomm and against the FTC.9

The Ninth Circuit concluded that Qualcomm’s behavior was not anticompetitive, but merely “hypercompetitive.”10 In its view, Qualcomm’s licensing practices spurred fiercer competition, demanding plaintiffs show harm in the market rather than pin liability on higher prices.11

The fact that chip buyers were key participants in the relevant markets where Qualcomm and its rivals compete was irrelevant, as was the fact that Qualcomm’s licensing practices would plausibly harm competition in both the chipset market and the interrelated market for chipset licenses. Nor did it matter that Qualcomm’s licensing strategy enabled it to maintain its monopoly in the modem chip markets by manipulating the two components of the all-in price charged to manufacturers in such a way as to discourage them from buying chips from Qualcomm’s rivals. Throughout the opinion, the Ninth Circuit criticized the district court’s focus on the harm to manufacturers rather than harm to the markets for modem chips, even though the case was based on manufacturer restrictions distorting competition in those chip markets.

The court was reluctant “to ascribe antitrust liability in these dynamic and rapidly changing technology markets without clearer proof of anticompetitive effect.”12 Instead, it cautioned against mistakenly characterizing new technologies and new business strategies as anticompetitive, and limited breaches of standard-setting undertakings to patent or contractual remedies.13

In filing for an en banc rehearing, the FTC pointed to the panel’s disregard of precedent in “elevating patent-law labels over economic substance,” by “holding that harms to Qualcomm’s customers are ‘beyond the scope of antitrust law.’”14 The FTC argued that the panel should have seen Qualcomm’s anticompetitive ploy to secure its chip monopoly by penalizing purchases of rival products. In particular, it argued that the court “seriously erred” when it dismissed the lower court’s “findings about the harm to [manufacturers] – including higher prices that are passed on to retail consumers – because [manufacturers] ‘are Qualcomm’s customers, not its competitors.’”15 While the Ninth Circuit has denied the FTC’s request for an en banc rehearing, the FTC remains free to petition the U.S. Supreme Court.

The FTC’s case against Qualcomm is symptomatic of how courts sometimes reach diametrically opposed conclusions based less on facts and data and more on the “ideological stampeding” of precedent to reach a desired outcome. As Professor Marina Lao noted:

[I]t is almost inevitable that a policymaker’s values will influence which theoretical models she will choose, whether her default is to intervene or not intervene if the theories and the evidence are indeterminate, what types of evidence she would consider relevant, and so forth. Her core economic and political beliefs will also likely affect her perspective on the aggregate social costs of false negatives relative to false positives, which will impact her judgment on whether liability should be found in a particular case or, indeed, whether a particular case should be brought in the first place.16

The same may be said of judges and enforcers. For instance, the district court favored a robust application of the Supreme Court’s decision in Aspen Skiing Co. v. Aspen Highlands Skiing Corp.,17 recognizing that the general freedom of businesses to choose with whom they deal was fettered.18 The district court also pointed to evidence that Qualcomm evaded its fair, reasonable, and non-discriminatory (“FRAND”) commitments by licensing selectively only to noncompetitors, undermining the competitive purpose of the standard-setting organization (“SSO”) joint venture.

The Ninth Circuit preferred the view that “businesses are free to choose the parties with whom they will deal, as well as the prices, terms, and conditions of that dealing.”19 It also leaned on “the persuasive policy arguments of several academics and practitioners with significant experience in SSOs, FRAND, and antitrust enforcement, who have expressed caution about using the antitrust laws to remedy what are essentially contractual disputes between private parties engaged in the pursuit of technological innovation.”20 Strikingly absent from that analysis was its own earlier decision in ITS v. Kodak,21 which recognized that pretextual refusals to license patents can result in antitrust liability.

The cause of this ideological stampeding is partly inherent in the structural architecture of antitrust law. Operative terms like “anticompetitive harm” and “efficiencies” remain undefined until trial. Courts adjudicating licensing agreements may additionally have to determine consumer welfare effects in several markets and even how antitrust intervention might affect innovation incentives. Such tradeoffs run squarely into the problem Robert Pitofsky observed of antitrust adjudication proceeding by mere “hunch, faith, and intuition.”22

Antitrust law can and should do better. Until now, artificial intelligence (“AI”) has mostly been vilified as an enabler of collusion or for its complicity in abuses by internet gatekeepers like Google.23 Ironically, antitrust law itself operates like an algorithm.24 Harvard, Chicago, and even Neo-Brandeis are simply different algorithms judges use to operationalize the vacuous wording of antitrust statutes to improve competitive outcomes. For instance, today’s prevailing market algorithm is based on Chicago School economic policy, which eschews regulatory intervention in favor of robust property rights and economic liberalism. In the licensing context, it trusts licensors to set the terms of their licenses in collaborating with licensees to meet consumer demand. Antitrust should be slow to disturb contested license terms since firms are assumed to already be putting resources to their most productive uses.

Chicago assumes judges fare poorly at distinguishing restraints that degrade competition from those that improve it. As Judge Easterbrook warned, “[o]nly someone with a very detailed knowledge of the market process, as well as the time and data needed for evaluation, would be able to answer that question. Sometimes no one can answer it.” Accordingly, he called for ending antitrust enforcement until “doubts” about “the ability of courts to make things better even with the best data . . . have been overcome.”25 For over forty years, judges have largely done precisely that.26 With the advent of AI, however, the time has come for antitrust to do better.

 

III. ANTITRUST’S AI DEFICIT

AI enables courts to better render evidence-based decisions. As a tool, it is, at its core, non-ideological and enables courts to minimize ideological stampeding. AI is a powerful new partner in making sense of the complex, dynamic, and fast-moving licensing markets many businesses operate in; courts and agencies can harness its ability to model price and innovation effects more precisely.

A. Minimizing Ideological Stampeding

Like many other areas of the law, antitrust analysis rests on analogical reasoning. Its method, however, is elusive. The rule of reason requires judges to determine if a licensing restriction is illegal based on vaguely worded antitrust statutes. Courts operationalizing probabilistic language like “plausible,” “potential,” and “likely” are vulnerable to relying on idiosyncratic biases. This makes outcomes vulnerable to ideological stampeding as judges, forced to balance short-term losses against future predicted gains, may instead fall back on answering more straightforward but misleading proxy questions.27

For instance, courts may assign the probability of harm occurring based on whether they think defendants may lose their incentive to innovate if forced to grant access to their proprietary technology.28 Courts may also be swayed to insist that plaintiffs debunk a defendant’s purported efficiencies from an offending licensing restriction even before the defendant has carried its burden of proving that the restriction is warranted. Moreover, work by Michael R. Baye and Joshua D. Wright reveals that judges routinely “delegate both factfinding and rulemaking to courtroom economists.”29

Today’s AI already scours depositions to provide quicker and more consistent analysis than attorneys can. AI can formalize and make explicit antitrust priorities, dampening the amplitude of ideological swings. Amazon’s Mechanical Turk and similar services can classify clear-cut cases based on previously defined parameters. Once the system has learned to identify features from a baseline of cases, deep learning algorithms can generate other examples via dataset augmentation. The algorithm can then compare the relevant facts to precedent. This is akin to the Socratic approach of lightly modifying hypotheticals.
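To make the mechanics concrete, the following is a minimal sketch, in Python, of how such a baseline classifier might be trained. The case summaries, labels, and modeling choices are hypothetical illustrations, not a real dataset or a validated antitrust model.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical case summaries labeled by human reviewers (e.g. crowdworkers).
summaries = [
    "dominant chipmaker conditioned chip supply on accepting a separate patent license",
    "exclusive dealing foreclosed rivals from key distribution channels",
    "nonexclusive cross-license lowered royalties and expanded output",
    "joint venture shared research costs without restricting downstream pricing",
]
labels = ["anticompetitive", "anticompetitive", "lawful", "lawful"]

# TF-IDF features feeding a linear classifier: a transparent, auditable baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(summaries, labels)

# Classify a new fact pattern and report the model's class probabilities.
new_case = ["supplier threatened to cut off supply unless the customer took a license"]
print(model.predict(new_case), model.predict_proba(new_case))

A system of this kind is only as good as the labeled baseline it learns from, which is why the curation and augmentation steps described above matter.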

By curating and synthesizing the applicable law, AI can narrow the range of acceptable legal analysis and dampen unhinged ideological swings based on the judge’s idiosyncrasies while minimizing errors from interpreting economic data. A Westlaw search reveals about ten thousand antitrust cases decided in the last hundred years alone, with over ninety percent from the last sixty years. Trained on classified cases, AI can help discern the relevant factors determining case outcomes, including instances where those factors are interrelated. Their impact on market conditions can be studied in ways similar to merger retrospectives, which provide a rearview mirror for enforcers to review their earlier decisions and adjust algorithmic weights based on prevailing economic evidence where appropriate.

With unsupervised machine learning, the algorithm can probe patterns through data mining. It can zero in on data clusters and find abstractions. It might reveal which factors courts allow to stampede over others. It can also account for interactions among indicators that attorneys and economists miss because of how the human mind contextualizes and associates familiar information.
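As an illustration of that unsupervised approach, the short Python sketch below clusters hypothetical numeric encodings of cases without any outcome labels; the features and values are invented for demonstration only.

import numpy as np
from sklearn.cluster import KMeans

# Each row is a fictitious case: [defendant market share, restraint duration (years), markets affected]
cases = np.array([
    [0.90, 8, 3],
    [0.85, 7, 2],
    [0.30, 1, 1],
    [0.25, 2, 1],
    [0.60, 5, 2],
])

# Group the cases into two clusters purely by similarity of their features.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cases)
print(clusters)

Analysts would then inspect what the cases in each cluster share, surfacing factor interactions that might otherwise go unnoticed.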

Over time, AI can chart the antitrust landscape and provide checklists that delineate prohibited and permitted licensing conduct with finer degrees of risk. This makes compliance less a leap of “hunch, faith, and intuition” and empowers businesses to make informed and confident licensing decisions, whether as licensors or licensees. It can also subject rhetoric, such as claims about patent holdup and patent holdout, to more rigorous evidence-based tests. In doing so, AI can help generalize information from legal and market data points to mark the path toward achieving policy goals and perhaps look back at a decision tree analysis to evaluate and refine the approach for the future.

B. Modeling Price and Innovation Effects

Adjudicating licensing terms may require courts to determine if they disrupt consumer-preference-signaling processes. Automakers including Honda, Toyota, Ford, and Tesla, arguing in Qualcomm’s rehearing for the Ninth Circuit’s ruling to be overturned, asserted that the outcome “poses significant threats to competition, consumer welfare, and innovation.”30 AI can help parties like these automakers develop evidence of anticompetitive effects or efficiencies and even predict the impact on nascent competition. This ability to forecast is particularly relevant to cases involving licensing terms, given that dynamic efficiency often lies at the heart of justifying restrictions.

It is well established that the arc of innovation follows an inverted “U-shape” curve.31 Competition increases innovation, but at a decreasing rate. Beyond a certain point, additional competition reduces innovation. AI can run simulations to determine optimal contestability conditions and better map synergies that affect innovation pathways by tracing user adoption of the technology. Effective AI solutions will need to be evolutionary, processing their observations and presenting advice that keeps up with current market conditions.
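A toy simulation conveys the intuition. The quadratic functional form and parameter values below are assumptions chosen only to illustrate the inverted-U relationship, not estimates drawn from the literature.

import numpy as np

competition = np.linspace(0, 1, 101)              # 0 = monopoly, 1 = perfect competition
innovation = 4 * competition * (1 - competition)  # rises, peaks, then falls

peak = competition[np.argmax(innovation)]
print(f"Innovation peaks at a competition intensity of about {peak:.2f}")

Richer simulations could replace the stylized curve with estimated parameters and trace how a contested licensing restriction shifts the market along it.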

CodeX, Stanford Law School’s Center for Legal Informatics, has taken on the mantle of institutional leadership in this regard. It has begun recruiting stakeholders, including academics and enforcers, to collectively consider what it calls “computational antitrust.”32  Described as “a new branch of legal informatics focused on the mechanization of antitrust analyses and procedures,”33 it aims to give companies “the tools to assess and ensure compliance with antitrust laws–before implementing new practices,” and “automate their interactions with antitrust agencies, starting with merger control.”34 Enforcers too “can use computational tools for improving their assessment of (anti-competitive) practices and mergers. They can also benefit from more accurate data and new methods to mechanize part of their activities.”35

 

IV. ADDRESSING THREE CHALLENGES OF AI IMPLEMENTATION

Integrating AI into antitrust will not be easy. System architects and engineers will need to deal with three principal challenges – data accountability, data bias, and data availability. With each one of these, the quest is not perfection but rather a better alternative to the status quo.

A. Data Accountability

Finding the ground truth in training data is a common challenge with AI.36 Powerful deep learning techniques may generate more accurate predictions but are often less interpretable, and algorithms can reach conclusions in ways that even data scientists cannot explain. A neural network trained through reinforcement learning, for example, improves by trial and error: it acts randomly at first and learns from each iteration, adjusting weights and parameters and choosing advantageous moves with increasing finesse. AI outputs in these instances may not map to any common-sense understanding of how the world works.

The quest for algorithmic transparency is complicated by a shift toward algorithmic secrecy in the face of case law hostile to conferring patents over software. Patent reform to address this has long been in the works, but the end is nowhere in sight if the pharma and tech sectors remain divided.37 A more promising route is to rely on the well-established remedies judges employ when issuing protective orders to safeguard litigants’ trade secrets, including making the algorithm available for in camera examination or under seal.

Dumbing down the AI can make it more parsable but causes its own problems. Apart from making the system less effective at its task, it leaves the system more vulnerable to gaming and adversarial learning by regulated parties. Nor is full disclosure of a system’s source code and data a solution. Generalist judges may lack the technical understanding necessary to make sense of it. Moreover, human decision-making is not necessarily more accountable, and may be less so.

Another problem is human bias. Courts and agencies assessing licenses rely on hypothesis-driven assessments that reflect human judgments about the likelihood of misconduct. These judgments, in turn, reflect the assumptions and biases of the individuals making them. Discrimination law, for example, seeks to interrogate decision-makers on whether their outcomes are justified. However, the existence of a justification says nothing about whether and to what extent decision-makers relied on it. Seen in this light, anti-discrimination rules seek accountability through explainability rather than transparency. This refocusing recognizes that the perfect is the enemy of the good and seeks first what is attainable.

In this regard, the more promising alternative is to mix modes of explanation to achieve better explainability. AI programs can give an accounting of the algorithm. This includes descriptions of the data, modeling choices, and factors that drive a model’s predictions. One way is through decision tree analysis, which, by its very nature, provides the structure of the decision process and sheds light on how the algorithm reached the result.

Algorithms based on decision tree analysis could track the factors identified by case law, with different features on various branches, such as the type of conduct, competitive impact, or market share. Alternatively, those factors could map to the facts of cases. Although decision trees depend on data with features amenable to classification, if the cases can be sorted into different nodes, the decision process can predict whether a case is anticompetitive and qualify that prediction with a probability drawn from comparison with other cases sharing the same set of attributes.
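A minimal sketch of that approach, again with hypothetical features and labels standing in for coded case law, might look like this in Python:

from sklearn.tree import DecisionTreeClassifier, export_text

# Features per case: [exclusive dealing (0/1), defendant market share, rival price increase (%)]
X = [
    [1, 0.90, 12.0],
    [1, 0.80, 9.0],
    [0, 0.35, 0.5],
    [0, 0.20, 0.0],
    [1, 0.40, 1.0],
    [0, 0.70, 6.0],
]
y = ["anticompetitive", "anticompetitive", "lawful", "lawful", "lawful", "anticompetitive"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The branching structure itself serves as the explanation of the prediction.
print(export_text(tree, feature_names=["exclusive_dealing", "market_share", "rival_price_increase"]))

# A new case is classified and qualified with a probability taken from the node
# it lands in, i.e. from the prior cases sharing its attributes.
new_case = [[1, 0.85, 10.0]]
print(tree.predict(new_case), tree.predict_proba(new_case))

The printed tree shows which feature splits drive the outcome, which is precisely the kind of accounting of the algorithm described above.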

Looking at fewer features also helps. If all example cases happened to be tech or drug cases, but that element was not included as a feature (or inferable from other features), then overfitting to tech or drugs would be much less likely. Finally, preprocessing also helps. This involves using unsupervised machine learning to find clusters that another algorithm can then learn to classify as conceptually linked groups. Since legal reasoning often depends on finding similarities between fact patterns, rather than identifying each feature independently, features can be clustered and annotated by a set of shared attributes to provide a standard data structure.

B. Data Bias

Instances of AI bias in assessing loan credit risk and criminal recidivism based on race are well known.38 Even assuming the algorithm’s variables are not systemically biased, facially neutral determinants might themselves have arisen as proxies for a biased attribute. Such biases may occur when the dataset is underinclusive or captures an unrepresentative sample of all the antitrust violations that occur.

Flagging licensing terms as suggestive of anticompetitive conduct relies on comparing them with previously flagged terms, potentially reducing the AI’s accuracy. When this occurs, AI detection becomes dominated by superficial features of prior enforcement decision-making. It ends up replicating idiosyncrasies rather than building richer and more precise models of noncompliance. This bias has manifested in predictive policing, resulting in police being deployed to the same neighborhoods regardless of their underlying crime rates.39 The bias is a systemic issue pervading AI enforcement tools because it is challenging to identify all true positives in the dataset. One solution is to include more examples in training and to test the model against held-out examples.

Even when collected, data may become outdated if not frequently updated to reflect changes in the underlying environment. New roads will render navigational apps less accurate unless the initial training data are updated. At the same time, that concern should not be overstated, particularly when the context remains constant. In dynamic environments, feedback data obtained by mapping outcomes to the input data that generated predictions of those outcomes can continuously improve algorithms. This mapping becomes particularly helpful when there is considerable variation within clearly defined boundaries.

C. Data Availability

Data needs to be available in sufficient quantity and quality to train the algorithm. Antitrust data can be challenging to find. Machine learning algorithms generally use thousands, if not millions, of examples; data from reported antitrust cases are much sparser by comparison. The limited number of cases to train the network creates a risk of error. Data can also be challenging to obtain when individuals who do not directly benefit from providing it must cooperate. Documents may not be in a machine-readable format or may require extensive pre-processing. Developers also need to navigate data laws limiting the collection, storage, and use of data.

Besides reported court cases, AI can calculate the profit implications of many market movements, with automatic spider-bots crawling the Internet to gather massive amounts of price-related information. AI solution providers can also contract with experts to classify and create training data, or procure data from existing sources like court records and other public sources.

Like the law, machine learning algorithms rely on analogies. For example, Amazon might attempt to predict a buyer’s preferences by finding another user with the most similar viewing history and then offering the item that user liked. Rather than training a network by exposing it repeatedly to examples, the system searches a database of examples to find the nearest match. This searching obviates both the need for enormous datasets and the need for training. The AI can therefore predict whether a business practice was anticompetitive merely by relying on whether the most similar case had been held to be so. The rule of reason has been notoriously resistant to broad generalization, but finding a close match may be sufficient to resolve an issue satisfactorily, and the technique can be applied more broadly.
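A nearest-neighbor sketch in Python captures this analogy-driven approach; the case encodings and outcomes are hypothetical placeholders rather than real precedent.

import numpy as np
from sklearn.neighbors import NearestNeighbors

# Prior cases encoded as [defendant market share, share of market foreclosed, price effect (%)]
prior_cases = np.array([
    [0.90, 0.60, 12.0],
    [0.30, 0.05, 0.0],
    [0.75, 0.40, 8.0],
])
outcomes = ["anticompetitive", "lawful", "anticompetitive"]

# Retrieve the single most similar prior case and borrow its outcome.
nn = NearestNeighbors(n_neighbors=1).fit(prior_cases)
new_case = np.array([[0.85, 0.55, 10.0]])
_, idx = nn.kneighbors(new_case)
print(outcomes[idx[0][0]])

The quality of such a match depends heavily on how the cases are encoded, which is why the feature-engineering and clustering steps discussed earlier remain important.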

 

V. CONCLUSION

The legal questions in Qualcomm are consequential in setting competitive norms in the telecommunications industry and influencing antitrust law in an economy anxious about the exercise of market power. Like many other antitrust cases, Qualcomm also shows symptoms of antitrust law’s inherent vulnerability to ideology stampeding facts and data. In cases involving license terms, the overlay of innovation considerations, as well as the dynamic and complex markets in which the parties operate, can make antitrust analysis even more amorphous.

The history of antitrust as an algorithm has been one of constant patches and updates. Still, few have recognized, much less captured in any substantial detail, the breadth and depth of the transformation that AI can bring to adjudicating licenses. Chicago brought with it a skepticism toward intervention that arose in response to concerns that successful businesses were being persecuted with an unhinged and unsubstantiated zeal. However, Chicago is a reactionary, non-interventionist phenomenon that has failed to stem many of the worst symptoms of market failure. More fundamentally, all current approaches are backward-looking and offer no effective means of dealing with those symptoms.

AI enables courts to better render evidence-based decisions. As a tool, it presents an opportunity to check and minimize ideological stampeding. AI is a powerful new partner in making sense of the complex, dynamic, and fast-moving licensing markets many businesses operate in; courts and agencies can harness its ability to model price and innovation effects more precisely. There are challenges to implementing AI, including data accountability, data availability, and data bias. These challenges can be addressed. The time to retool antitrust is now.


1 Daryl Lim is a Professor of Law and the Director of the Center for Intellectual Property (IP), Information & Privacy Law at UIC John Marshall Law School. His teaching and research interests include all areas of IP law, antitrust law, and civil procedure.

2 Shara Tibken, Qualcomm is a Monopoly and Must Renegotiate Deals, Judge Rules, CNet (May 22, 2019).

3 Adapted from Daryl Lim, Futurecasting Antitrust (publication anticipated in 2021/22).    

4 Fed. Trade Comm’n v. Qualcomm Inc., 411 F. Supp. 3d 658 (N.D. Cal. 2019).

5 Id. at 703.

6 Id. at 698.

7 United States’ Statement of Interest Concerning Qualcomm’s Motion for Partial Stay of Injunction Pending Appeal in Federal Trade Commission v. Qualcomm Inc. (July 16, 2019), available at https://www.justice.gov/atr/case-document/369345.

8 See Baker & Hostetler LLP, Antitrust Agency Turf War Over Big Tech Investigations (Oct. 9, 2019).

9 Brief of the United States of America as Amicus Curiae in Support of Appellant and Vacatur, Fed. Trade Comm’n v. Qualcomm Inc., No. 19-16122 (9th Cir. 2020) (August 30, 2019). See also Kristen Osenga, Anticompetitive or Hyper-Competitive? An Analysis of the FTC v. Qualcomm Oral Argument, IPWatchdog (February 20, 2020), available at https://www.ipwatchdog.com/2020/02/20/anticompetitive-hyper-competitive-analysis-ftc-v-qualcomm-oral-argument/id=119124/.

10 Fed. Trade Comm’n v. Qualcomm Inc., 969 F.3d 974, 982 (9th Cir. 2020).

11 Id. at 1003.

12 Id.

13 Id. at 1005.

14 Petition of the Federal Trade Commission for Rehearing En Banc at 9, Fed. Trade Comm’n v. Qualcomm Inc., No. 19-16122 (9th Cir. Sept. 25, 2020).

15 Id. at 16.

16 Marina Lao, Ideology Matters in the Antitrust Debate, 79 Antitrust L.J. 649, 653 (2014).

17 472 U.S. 585 (1985).

18 Qualcomm, supra note 4, at 760.

19 Qualcomm, supra note 10, at 997.

20 Id.

21 Image Tech. Servs., Inc. v. Eastman Kodak Co., 125 F.3d 1195 (9th Cir. 1997).

22 Robert Pitofsky, The Political Content of Antitrust, 127 U. Pa. L. Rev. 1051, 1065 (1979).

23 See e.g. Ioannis Kokkoris, A Few Reflections on the Recent Caselaw on Algorithmic Collusion, Competition Policy Int’l, Antitrust Chronicle (July 2020). See also Ariel Ezrachi & Maurice E. Stucke, Artificial Intelligence & Collusion: When Computers Inhibit Competition, 2017 U. Ill. L. Rev. 1775 (2017).

24 See generally, Ramsi Woodcock, The Market as a Learning Algorithm: Consequences for Regulation and Antitrust (2020), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3661971.

25 Frank H. Easterbrook, Workable Antitrust Policy, 84 Mich. L. Rev. 1696, 1701 (1986).

26 Richard A. Posner, The Chicago School of Antitrust Analysis, 127 U. Pa. L. Rev. 925 (1979).

27 See e.g. Daryl Lim, Retooling the Patent-Antitrust Intersection: Insights from Behavioral Economics, 69 Baylor L. Rev. 124, 134 (2017).

28 See e.g. Daryl Lim, Predictive Analytics, 51 Loy. U. Chi. L.J. 161, 216 (2019).

29 Michael R. Baye & Joshua D. Wright, Is Antitrust Too Complicated for Generalist Judges? The Impact of Economic Complexity and Judicial Training on Appeals, 54 J L & Econ 1, 2 (2011).

30 Alison Frankel, Consumers, Scholars and Downstream Businesses back FTC bid for Qualcomm en banc at 9th Circuit, 28 Westlaw Journal Antitrust (Oct. 15, 2020).

31 Philippe Aghion, Nick Bloom, Richard Blundell, Rachel Griffith & Peter Howitt, Competition and Innovation: An Inverted-U Relationship, 120 Quarterly Journal of Economics 701, 701-28 (2005).

32 Codex, Computational Antitrust, https://law.stanford.edu/projects/computational-antitrust/.

33 Id.

34 Id.

35 Id.

36 See e.g. David Freeman Engstrom, et al, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, Stanford Law and Policy Lab 24 (February 2020) https://www-cdn.law.stanford.edu/wp-content/uploads/2020/02/ACUS-AI-Report.pdf.

37 AEI, 1 Year Later, Patent Eligibility Reform No Further Along (August 14, 2020), https://www.aei.org/technology-and-innovation/1-year-later-patent-eligibility-reform-no-further-along/.

38 See generally, Meghan J. Ryan, Secret Algorithms, IP Rights, and the Public Interest, Nevada L. J., (forthcoming 2021) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3691765.

39 Id. at 25.