In current antitrust policy debates, it is almost a foregone conclusion that digital platforms’ collection and use of “big data” is a barrier to entry. In this article, we argue that big data should properly be considered a two-stage process. This classification matters because it allows us to link big data to concepts with which antitrust is already familiar: economies of scale, learning by doing, and research & development. By linking big data with the familiar, we hope to avoid a common tendency in antitrust to condemn the strange.

Alexander Krzepicki, Joshua Wright, John Yun

I. INTRODUCTION

An emerging refrain in antitrust dialog is that the accumulation and use of big data is a unique and particularly troublesome entry barrier, worthy of antitrust scrutiny. Yet both the concept of big data and the concept of an entry barrier continue to be used in a highly casual and superficial manner. Antitrust is a fact-intensive area of law, given the necessity both to understand a business practice (including its potential harms and benefits) and to forecast market performance. While antitrust jurisprudence has developed reasonable measures to facilitate such analyses — such as condemning price fixing as a per se violation — conduct such as vertical integration, resale price maintenance, and exclusive deals rightly requires substantive inquiry to determine its ultimate competitive impact. Though some would

...