In current antitrust policy debates, it is almost a foregone conclusion that digital platforms’ collection and use of “big data” is a barrier to entry. In this article, we argue that big data is properly considered a two-stage process. This classification matters because it allows us to link big data to concepts with which antitrust is already familiar: economies of scale, learning by doing, and research & development. By linking big data with the familiar, we hope to avoid a common tendency in antitrust to condemn the strange.

Alexander Krzepicki, Joshua Wright, John Yun1

I. INTRODUCTION

An emerging refrain in antitrust dialogue is that the accumulation and use of big data is a unique and particularly troublesome entry barrier, worthy of antitrust scrutiny. Yet both the concept of big data and the concept of entry barriers continue to be used in a casual and superficial manner. Antitrust is a fact-intensive area of law, given the necessity both to understand a business practice (including its potential harms and benefits) and to forecast market performance. While antitrust jurisprudence has developed reasonable measures to facilitate such analyses — such as condemning price fixing as a per se violation — conduct such as vertical integration, resale price maintenance, and exclusive dealing rightly requires a substantive inquiry to determine its ultimate competitive impact. Though some would argue that the courts and agencies commit too many false positives or false negatives, there is, in the end, broad agreement among serious antitrust practitioners and scholars that a rule of reason analysis requires the avoidance of reflexive labels.

In this article, we argue that big data should properly be considered a two-stage process. In stage one, a firm collects the data. In stage two, a firm transforms the data into some benefit that ultimately increases profitability. This classification matters because it allows us to link big data to concepts that antitrust is already familiar with — namely, economies of scale, learning by doing, and research & development.

By linking big data with the familiar, we hope to avoid a common tendency in antitrust to condemn the strange. The history of antitrust is littered with examples of scholars, agencies, and courts rushing to condemn new and irregular practices.2 For instance, the following practices were at one point heavily condemned as anticompetitive, but are today almost universally acknowledged as having the potential to increase welfare: horizontal mergers in unconcentrated markets, vertical mergers, resale price maintenance, exclusive territories, and price discrimination.

We also discuss whether big data should be considered an entry barrier, which, in a broad and abstract sense, measures the relative difficulty of obtaining necessary inputs to production. For instance, if a firm monopolizes lithium, which is a critical and rare resource, then it can raise the price to a level that severely hinders its use. In a similar way, monopoly control over an idea due to a patent creates greater scarcity — even if only temporarily — while potentially raising prices and putting up a “barrier” to those who seek to enter. Yet, the terms “entry barriers” and “barriers to entry” are not well defined in antitrust and are used in different ways by different people.3 Therefore, we argue, as many have previously argued, that labeling an input as an entry barrier is generally unhelpful. Rather, what is necessary is a full-fledged entry analysis — as outlined in the Horizontal Merger Guidelines.4

Ultimately, the lesson that antitrust continues to relearn is that the impulse to condemn the strange as anticompetitive causes tangible harm to consumers. Big data is receiving this treatment now, and artificial intelligence and machine learning increasingly are as well. Circumspection, and a visit from the ghost of antitrust past, counsel some degree of prudence in how we incorporate big data into antitrust law.

II. BIG DATA: A TWO-STAGE PROCESS

In order to understand the role of big data in production, let us begin with a concept familiar in antitrust. Economies of scale is the idea that the average cost of production falls when, for instance, a firm makes 1,000 units compared to 100 units. This fall in costs occurs because producing more output allows a firm to spread fixed (including sunk) costs over significantly more units and to purchase inputs at bulk discounts, both of which lower the average total cost. In a sense, lower average cost is a residual benefit of higher production levels. In a similar, but slightly different, way, the collection of big data is a residual of consumption. For example, when a multisided platform (e.g. Uber, Lyft, Juno) brings together two groups for a mutually beneficial exchange (e.g. passengers and drivers), it creates value for the two groups as well as for the platform. During this exchange, the platform can also collect data. Unlike economies of scale, however, the mere collection of big data does not inevitably provide a benefit that results in higher profits. Rather, collected data provides a potential opportunity for higher profits.5 We can label this first stage, where the firm collects the data, as the “data-input” stage.
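A stylized numerical sketch may help fix the economies-of-scale arithmetic (the figures are ours, chosen purely for illustration, not drawn from any actual firm). Suppose a firm incurs a fixed cost F and a constant marginal cost c per unit, so that its average cost at output q is

AC(q) = \frac{F}{q} + c.

With F = \$90{,}000 and c = \$10, average cost is \$910 per unit at 100 units but only \$100 per unit at 1,000 units: the same fixed cost, spread over ten times the output, accounts for the entire decline.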

We can label the second stage as the “data-output” stage, in which a firm transforms the data into something that creates value in the form of lower costs, improved quality, or innovative new products. This stage involves combining the data with other resources and inputs, such as intellectual property, skilled labor, and capital infrastructure. Firms will have differential advantages and skills at this stage. By way of analogy, while academics generally all have access to the same scholarly journals, court cases, and, in many instances, data, their output will differ in both quality and quantity, based on a variety of factors. This second stage is therefore more akin to learning by doing and research & development. In other words, the data-output stage is about innovation. The innovation could be, for example, improving a proprietary search algorithm or building a multi-dimensional profile of users on a social network for improved advertiser targeting. In either case, the firm is creating value that, in turn, increases profits.

Learning by doing is an economic concept that, like economies of scale, involves lower costs as output expands. The difference is that the lower costs are due to the cumulative effects of experience in production, which yields a more efficient production process.6 In economics parlance, economies of scale involve lower average costs at higher levels of output due, in no small part, to spreading fixed costs over more output; learning by doing involves lower costs because the production function itself becomes more “efficient” — even holding input costs constant. In other words, the firm improves its productivity, which, in turn, lowers its per unit costs.7
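The distinction can be expressed compactly in the notation economists often use for “learning curves” (a stylized sketch under standard assumptions; the symbols are ours). Economies of scale tie unit cost to the current rate of output, q, while learning by doing ties unit cost to cumulative output to date, N:

AC(q) = \frac{F}{q} + c \qquad \text{versus} \qquad c(N) = c_1 N^{-\beta},

where c_1 is the cost of producing the first unit and \beta > 0 measures the strength of learning. In the second expression, unit costs fall only as the firm accumulates production experience, even if input prices and the current scale of output never change.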

Importantly, learning by doing is not a process that automatically occurs. In their detailed examination of how learning by doing is actually implemented at an automaker, Professors Steven Levitt, John List, and Chad Syverson conclude that in a “more full-fledged view of learning by doing, a producer’s experience gains do not so much cause efficiency enhancements themselves as they provide opportunities for management to exploit.”8 In the context of big data, economist Hal Varian makes a similar point about learning by doing: “it can be somewhat misleading as it suggests that ‘learning’ is a passive activity that automatically happens as more output is produced. Nothing could be further from the truth. Learning by doing necessarily requires investment in data collection, analysis, and experimentation.”9

In a similar way, research & development (“R&D”) involves investing dedicated resources to generate new intellectual property, products, and processes. While a more successful firm will have greater capacity, experience, and accessible financial capital to invest in this process, successful R&D still demands considerable skill and persistence and entails considerable risk.

Conceptually, the need to expend resources and effort to engage successfully in learning by doing and R&D maps usefully onto the data-output stage of big data. It suggests we should be skeptical of claims that big data somehow poses unique problems from an antitrust perspective. As with learning by doing and R&D, all firms have the capability and opportunity to use big data to improve profits through higher quality products or lower costs.

Of course, larger firms will have more data — just as larger firms will have more output and, likely, revenue. Yet, more output and revenue do not necessarily translate into anticompetitive outcomes — nor does having more data. Further, having more data confers a potential benefit with diminishing returns. The degree to which diminishing returns become a serious factor in the analysis will differ by firm and industry — yet the general principle remains.
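A simple statistical illustration of diminishing returns (our example, which assumes independent observations and a standard estimation setting, not the economics of any particular platform): the precision of an estimate built from n data points typically improves only with the square root of n,

\text{SE}(n) \propto \frac{1}{\sqrt{n}},

where SE denotes the standard error. Quadrupling the volume of data thus only halves the statistical error; each additional observation is worth less than the one before it.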

III. BIG DATA AS AN ENTRY BARRIER

The concept of entry barriers has a long history in economics and antitrust. Despite this history, there is still no universal agreement on its use.10 The term “barriers to entry” is generally used in one of two ways: to describe factors that slow the arrival of new competition, or to describe conditions that allow incumbents to earn long-run profits in excess of the competitive level. Professor Dennis Carlton summarizes the resulting confusion: “Trying to use ‘barriers to entry’ to refer to both the factors that influence the time it takes to reach a new equilibrium and to whether there are excess long-run profits is confusing.”11 The former definition is consistent with the analysis of entry in the Horizontal Merger Guidelines, which involves assessing the timeliness, likelihood, and sufficiency of entry.12 The latter definition is more in line with the economics literature on barriers.13

The danger in labeling a factor of production, including big data, as a “barrier to entry” is the lack of clarity regarding which definition one has in mind. All business ventures involve costs — including the costs of entry. Common examples include legal and regulatory costs, licensing and developing intellectual property, expenditures on specialized equipment, and hiring skilled labor. Merely identifying a set of costs that must be incurred to achieve entry and labeling them “entry barriers” serves no real purpose. Either the definition of barriers to entry should be made explicit and the welfare consequences evaluated,14 or, as Carlton recommends, “rather than focusing on whether an entry barrier exists according to some definition, analysts should explain how the industry will behave over the next several years . . . [which] will force them to pay attention to uncertainty and adjustment costs.”15

Consequently, it makes little sense to label big data as a barrier to entry and thereby treat it as an inevitable impediment to competition and consumer welfare.16 Effective investments in big data (along with machine learning and artificial intelligence) can certainly create competitive distance between rivals. Yet this distance is a byproduct of competition on the merits and, as numerous examples confirm (including the well-documented replacement of incumbents in digital markets), is not necessarily an impediment to entry by innovative new firms.17 Rather than labeling big data a barrier to entry, the focus should be on assessing what big data helps a firm accomplish — whether in a welfare-enhancing or a welfare-reducing manner.

In sum, a laudable goal in antitrust is to replace the designation of inputs as “barriers to entry” with a more fruitful, and relevant, entry analysis. This is the approach the U.S. antitrust agencies have adopted, where the aim is to thoroughly “examine the timeliness, likelihood, and sufficiency of the entry efforts an entrant might practically employ.”18 What must be avoided is a focus on mere possibilities — whether the possibility of no entry or of easy entry. These shortcuts provide policymakers and courts little ultimate guidance.

While the state of entry analysis in the courts is beyond the scope of this article, it is worth noting that imprecise and vague notions of “entry barriers” are a problem there, too. According to Professor Daniel Lazaroff, “the Supreme Court has really never provided a comprehensive analysis of barriers to entry and their role in interpreting the Sherman, Clayton, and Federal Trade Commission Acts. Rather, the Court has periodically referenced entry barriers in antitrust cases, resulting in a somewhat cryptic and uncertain message to lower courts, litigants and students of antitrust law.”19 A brief survey of recent antitrust cases confirms Professor Lazaroff’s observation: the current treatment of entry barriers remains relatively perfunctory and lacking in clarity. Indeed, judicial conclusions about what constitutes an entry barrier verge on the contradictory.20 Some courts have characterized the existence of economies of scale in a market as an indication of barriers to entry, although scale economies would not qualify as a barrier under influential definitions, such as Professor George Stigler’s.21 Even in cases where the discussion of entry barriers is more thorough, the lack of an accepted economic framework leaves courts fumbling in the dark when deciding the question.22 All told, this is an area of antitrust law ripe for a more formalized and rigorous scheme of analysis, grounded in sound economics.

IV. CONCLUSION

As interest in antitrust policy expands, the commitment to uphold the integrity of economic and legal analysis should be renewed. Little is gained from moving antitrust from a fact-based area of law toward a more reflexive, rhetorical one. Consistent with the prior literature on big data, we argue that big data should be considered a two-stage process in antitrust analysis. This framing allows antitrust law to consider big data properly in the context of a larger entry analysis.


1 J.D. Candidate, 2021, Antonin Scalia Law School, George Mason University; University of Virginia, B.S., Electrical Engineering, 2014; University Professor of Law and Executive Director, Global Antitrust Institute, Antonin Scalia Law School, George Mason University; Associate Professor of Law and Director of Economic Education, Global Antitrust Institute, Antonin Scalia Law School, George Mason University.

2 Professor Ronald Coase, in discussing the intersection of industrial organization and antitrust, finds the desire to be “of service to one’s fellows” in the realm of public policy has created a tendency that “has discouraged a critical questioning of the data and of the worth of the analysis, leading the many able scholars in this field to tolerate standards of evidence and analysis which, I believe, they would otherwise have rejected.” Further, “the association of the study of industrial organization with antitrust policy has created a disposition to search for monopolistic explanations for all business practices whose justification is not obvious to the meanest intelligence.” See Ronald H. Coase, Industrial Organization: A Proposal for Research, in 3 Economic Research: Retrospect and Prospect: Policy Issues and Research Opportunities in Industrial Organization 66, 68 (Victor R. Fuchs, ed., 1972).

3 See Dennis W. Carlton, Barriers to Entry, 1 Issues in Competition L. & Pol’y 601 (2008).

4 U.S. Dep’t of Justice & Fed. Trade Comm’n, Horizontal Merger Guidelines (2010).

5 See, e.g. Hal Varian, Use and Abuse of Network Effects, in Towards a Just Society: Joseph Stiglitz and Twenty-First Century Economics 232 (Martin Guzman ed., 2018) (“Mere data by itself doesn’t confer a competitive advantage; that data has to be translated into information, knowledge, and action.”).

6 For one of the earliest formulations of the concept as it applies to firms, see Kenneth J. Arrow, The Economic Implications of Learning by Doing, 29 Rev. Econ. Studies 155 (1962).

7 This is just one formulation of learning by doing. See Peter Thompson, Learning by Doing, in Handbook of Economics of Innovation (Bronwyn Hall & Nathan Rosenberg, eds., 2010).

8 See Steven D. Levitt, John A. List & Chad Syverson, Toward an Understanding of Learning by Doing: Evidence from an Automobile Assembly Plant, 121 J. Pol. Econ. 643, 647 (2013); see also John M. Dutton & Annie Thomas, Treating Progress Functions as a Managerial Opportunity, 9 Acad. Mgmt. Rev. 235 (1984).

9 Varian, supra note 5, at 229–30.

10 See, e.g. W. Kip Viscusi, John M. Vernon & Joseph E. Harrington, Jr., Economics of Regulation and Antitrust 168 (4th ed., 2005) (“There is perhaps no subject that has created more controversy among industrial organization economists than that of barriers to entry. At one extreme, some economists argue that the only real barriers are government related . . . At the other end of the spectrum, some economists argue that almost any large expenditure necessary to start up a business is a barrier to entry.”).

11 Carlton, supra note 3, at 606.

12 Horizontal Merger Guidelines, supra note 4, § 9.

13 For an overview of the evolution of economic thought on barriers to entry, see R. Preston McAfee, Hugo M. Mialon, & Michael A. Williams, What is a Barrier to Entry?, 94 Am. Econ. Rev. 461 (2004).

14 See Carl C. von Weizsäcker, A Welfare Analysis of Barriers to Entry, 11 Bell J. Econ. 399 (1980).

15 Carlton, supra note 3, at 615. Similarly, Demsetz observed that conditions frequently considered barriers to entry, such as scale economies, capital requirements, and advertising expenditures, are not the fundamental source of barriers; the fundamental barriers are rather the cost of information and the uncertainty that an entrant has to overcome. See Harold Demsetz, Barriers to Entry, 72 Am. Econ. Rev. 47 (1982).

16 Professors Anja Lambrecht & Catherine Tucker arrive at this conclusion through a slightly different approach. See Anja Lambrecht & Catherine Tucker, Can Big Data Protect a Firm from Competition?, CPI Antitrust Chronicle, January 2017, at 8 (“For a wide range of examples from the digital economy we demonstrate that when firms have access to big data, at least one, and often more, of the four criteria which are required for a resource to constitute a sustainable competitive advantage are not met.”).

17 See Competition Bureau Canada, Big Data and Innovation: Implications for Competition Policy in Canada 14 (2017), https://www.competitionbureau.gc.ca/eic/site/cb-bc.nsf/vwapj/CB-Report-BigData-Eng.pdf/$file/CB-Report-BigData-Eng.pdf (“Developing valuable data through competition on the merits does not run afoul of the Act even if it results in significant market power. For example, a firm can create market power by developing a high-quality product or an efficient production process.”).

18 Horizontal Merger Guidelines, supra note 4, § 9.

19 Daniel E. Lazaroff, Entry Barriers and Contemporary Antitrust Litigation, 7 U.C. Davis Bus. L.J. 1 (2006), https://blj.ucdavis.edu/archives/vol-7-no-1/Entry-Barriers-and-Contemporary-Antitrust-Litigation.html

20 Compare GDHI Mktg. LLC v. Antsel Mktg. LLC, No. 18-CV-2672-MSK-NRN, 2019 WL 4572853, at *9 n.5 (D. Colo. Sept. 20, 2019) (“The mere cost of capital is not a barrier to entry.”) with Philadelphia Taxi Ass’n, Inc. v. Uber Techs., Inc., 886 F.3d 332, 342 (3d Cir. 2018) (“Entry barriers include . . . high capital costs.”).

21 See, e.g. N.M. Oncology v. Presbyterian Healthcare Servs., No. CV 12-00526 MV/GBW, 2019 WL 6040036, at *7 (D.N.M. Nov. 14, 2019) (“[T]here are significant and continuing barriers to entry into the relevant markets. First, there can be no dispute that any new insurer would need to build a provider network.”); see George J. Stigler, The Organization of Industry 67 (New ed. 1983); John M. Yun, Antitrust After Big Data, 4 Criterion J. Innovation 407, 421 (2019).

22 See Buccaneer Energy (USA) Inc. v. Gunnison Energy Corp., 846 F.3d 1297, 1316–17 (10th Cir. 2017).