In response to the growing concerns around artificial intelligence, algorithms, and their influence over consumers’ choices, competition authorities have adopted more stringent rules regarding self-preferencing algorithms used by digital platforms. However, from a theoretical perspective, self-preferencing algorithms can have pro-competitive benefits. There is no consensus from the economic literature on whether pro-competitive benefits or possible anti-competitive considerations prevail in the context of self-preferencing algorithms used by digital platforms. Determining the net impact of recommendation algorithms on competition and consumer welfare requires individualized analysis accounting for the workings of specific algorithms, competitive context, and market environment.

By Emilie Feyler & Veronica Postal[1]


The role of algorithms and artificial intelligence (“AI”) in people’s consumption choices and everyday decision-making has been growing hand-in-hand with the size of the digital economy. For example, about 80 percent of the content streamed on Netflix is the result of algorithmic recommendation, while only 20 percent is streamed through active user search.[2] The public launch of ChatGPT in November 2022 pushed the boundary of what people believed AI systems could achieve further than ever. While the advances in deep learning technologies and their application to a wide range of industries open numerous opportunities, the use of such algorithms has also raised a number of concerns with respect to ethics, privacy, security, and bias.

The notion of “algorithmic bias” refers to errors in a computer system that create “unfair” outcomes, for instance, by privileging one group of users over another. Algorithmic bias can emerge for several reasons, including a flaw in the design of the algorithm or reliance on incomplete or non-representative data. Ultimately, algorithms are designed by humans, and thus their architecture involves some measure of subjective judgment with respect to their parameters and underlying assumptions. The risk of unintentional bias has been exacerbated by the development of AI and semi-supervised learning methods that allow algorithms to train themselves without requiring input from their creators. Overall, the concerns relate to the lack of transparency of “black box” algorithms, and the fear that their users (and sometimes their own creators) do not truly understand how they work and how results are generated. Recently, a group of AI experts urged AI labs to “immediately pause for at least 6 months the training of AI systems,” to better understand AI’s repercussions on the economy and society at large.[3]

In this article, we focus on algorithms that are allegedly “biased” by their creator to privilege their own content and products, the so-called “self-preferencing” algorithms. For example, according to a recent investigation by the European Commission (“EC”), Google Search’s algorithm provided preferential treatment to the firm’s own comparison-shopping service, Google Shopping, over rivals’ comparison-shopping services (e.g., Amazon and eBay).[4] There has been active debate regarding the competitive implications of such algorithmic self-preferencing practices in digital markets, where large platforms often play a dual role: they operate the digital marketplace as information intermediaries while also competing in that marketplace with their own products.

Competition authorities around the world are increasing their scrutiny of digital markets and purported instances of self-preferencing. The European Union’s Digital Markets Act (“DMA”), which entered into force in November 2022, explicitly prohibits self-preferencing by digital platforms, and proposed legislation in the United States, such as the American Innovation and Choice Online Act, would impose similar restrictions, although no such bill has so far been passed by the U.S. Congress. However, the economic literature is not conclusive about whether self-preferencing algorithms are inherently harmful to consumers and competition. As discussed in this article, such algorithms can have pro-competitive benefits. Understanding whether pro-competitive benefits or potential anti-competitive considerations prevail requires careful analysis conducted on a case-by-case basis.



From an economic perspective, self-preferencing practices are nothing new. Companies have long favored their own products and services in downstream markets over their rivals’ across disparate industries. For instance, in the retail sector, it is a common business practice for supermarkets to give more favorable shelf placement to their private-label products than to competitors’ brands. Recommendation algorithms can be considered the digital equivalent of such practices if they privilege the products of the platform’s owner over competitors’ products. As such, the competitive implications of self-preferencing practices in digital markets can be explained using standard economic mechanisms. Indeed, the pro- and anti-competitive considerations of recommendation algorithms are similar to those of steering and tying practices.[5] On one hand, steering and tying practices can increase convenience for consumers, product quality, and incentives for innovation. On the other hand, they can allow firms to leverage their dominant position in one market to acquire monopoly power in a second market, or they could be used as an exclusionary tool against competitors or new entrants. Courts have generally adopted a “rule of reason” approach to traditional tying cases, weighing the pro-competitive benefits against the potential anti-competitive considerations, if any, on a case-by-case basis.[6]

The traditional antitrust concerns about tying and steering practices have taken on a new dimension in the context of digital platforms and recommendation algorithms. We identify two main reasons why the antitrust analysis of self-preferencing algorithms is more complex than in traditional cases: self-preferencing algorithms can be difficult to detect, and they can influence users’ choices without their full awareness.

As a preliminary issue, it is difficult to assess the extent to which self-preferencing algorithms exist on a digital platform. One needs to investigate the underlying complexities of the algorithm at issue to know whether specific recommendations are the result of steering practices as opposed to an objective ranking of products based on consumers’ preferences, products’ quality, or prices. Therefore, algorithms that unfairly advantage certain products may be harder to detect for competition authorities, users, or competitors than traditional self-preferencing practices in brick-and-mortar businesses.

In addition, algorithms can represent a “black box” to users, who may not understand how recommendations are generated and whether they stem from past behavior, prices, product quality, or other factors. Researchers and policymakers have raised concerns that digital platforms could take advantage of the opacity of these recommendation algorithms to predominantly display their own products and services, even though consumers may prefer other, cheaper, or higher-quality products.[7] In other words, the concern is that consumers, now more than ever before, could be influenced by such algorithms without full cognitive awareness, and may make decisions that are inconsistent with their preferences.

Regulators and courts around the world have released a number of decisions on cases involving algorithmic self-preferencing practices. As described below, courts and competition authorities are deviating from the traditional “rule of reason” approach and are moving towards the full prohibition of self-preferencing practices for digital platforms.



The wave of increased scrutiny of self-preferencing algorithms started in the early 2010s, when competition authorities in the United States and Europe started investigating whether Google biased its search results by allegedly promoting its own content and selectively demoting competitors’ content. Despite conflicting findings of the U.S. Federal Trade Commission (“FTC”) and of the EC on whether the alleged self-preferencing constituted an illegal abuse of market power, antitrust authorities around the world initiated a wave of investigations into similar practices by other technology companies.[8]

In the last few years, several countries adopted landmark legislation aimed at prohibiting self-preferencing by digital platforms, among other measures intended to enhance the protection of competition in the digital economy. A January 2021 amendment to the German Act against Restraints of Competition overhauled German competition law, allowing the German Federal Cartel Office (“FCO”) to prohibit certain conduct (including self-preferencing) by digital platforms when a company’s market position is found to be of “paramount significance across markets.” By June 2021, the FCO had opened investigations against Apple, Facebook, Amazon, and Google, and in 2022, it issued a ground-breaking decision declaring that Alphabet Inc., Google’s parent company, and Meta, Facebook’s parent company, are of “paramount significance for competition across markets” in Germany, allowing the FCO to prohibit these companies from engaging in purported self-preferencing behavior for a period of five years.[9]

The European Union followed suit with the DMA in November 2022, a wide-ranging piece of legislation aimed at regulating “unfair” business practices by large online platforms designated as gatekeepers between European businesses and consumers.[10] The DMA explicitly bans self-preferencing, stating that “the gatekeeper shall not treat more favourably, in ranking and related indexing and crawling, services and products offered by the gatekeeper itself than similar services or products of a third party.”[11]

U.S. lawmakers have also repeatedly considered legislation to regulate self-preferencing by platform operators in recent years. An initial attempt at prohibiting self-preferencing by digital platforms was made in August 2021 with the Open App Markets Act. Another attempt was made in October 2021 with the introduction of the American Innovation and Choice Online Act, a bill focused on regulating big tech companies and limiting self-preferencing by platform operators.[12] Neither bill was enacted by the U.S. Congress, and no further bills on this issue have been introduced. A July 2022 bipartisan report issued by the U.S. House Committee on the Judiciary on competition in digital markets called for regulation of various practices adopted by technology platforms, including the use of purported “self-preferencing” in algorithms.[13]



Although the rules imposed by competition authorities regarding self-preferencing algorithms are increasingly stringent, few economic studies have been conducted to assess whether such algorithms are indeed harmful to consumers, social welfare, and competition, or whether the pro-competitive benefits of such practices may outweigh the potential anti-competitive considerations. The findings of this literature have been ambiguous, suggesting that self-preferencing algorithms’ effects on competition vary on a case-by-case basis.[14]

Some theoretical papers have shown that self-preferencing behavior by digital platforms does not necessarily harm competition. De Cornière & Taylor (2014) find that self-preferencing by search engines in advertising may not be harmful to consumers and may in fact provide better content by reducing the nuisance costs of excessive advertising.[15] Their model predicts that the increased revenue from sponsored ads could enable the search engine to reduce the number of advertisements displayed and therefore increase users’ utility. In a more recent article, the authors find that self-preferencing behavior by information intermediaries may benefit consumers in certain market environments, for instance, if firms compete on quality.[16] Zennyo (2022) puts forth an economic model showing that self-preferencing behavior by digital platforms can benefit consumers through lower commission fees to third-party sellers, allowing them to decrease their prices, in turn attracting more consumers and more third-party sellers onto the platform.[17] The economic intuition is that algorithmic self-preferencing enables the platform to sell its own products more effectively, which increases its expected profit per consumer and therefore increases its incentives to attract more consumers. To this end, the platform has economic incentives to reduce commission fees so that the resulting consumer prices are lower, increasing consumer participation. The increase in consumer participation in turn stimulates third-party seller participation.

The economic literature has also investigated the effectiveness of policy interventions against self-preferencing algorithms, including behavioral and structural remedies, but existing studies have yielded ambiguous results. Hagiu et al. (2022) claim that banning the platform’s marketplace (i.e., preventing the platform from selling its own products) would likely result in lower consumer surplus and lower social welfare.[18] Zennyo (2022) finds that the separation of the marketplace and reseller divisions, which also prevents the platform from self-preferencing, may be detrimental to consumers and third-party sellers. In this model, structural separation leads the platform to raise commission fees to third-party sellers, which in turn results in higher consumer prices and lower consumer surplus. According to Kittaka & Sato (2022), while the prohibition of self-preferencing by dual-role platforms may have adverse effects, structural separation could improve consumer surplus through lower prices.[19] De Cornière & Taylor (2019) find that the efficiency of various policy interventions, such as imposing recommendation neutrality, transparency policies, or structural separation, depends on the market environment.



In response to the growing concerns around algorithms and their influence over consumers’ choices, courts and competition authorities have been deviating from the traditional “rule of reason” approach to adopt more stringent rules for digital platforms regarding self-preferencing practices. However, from a theoretical perspective, self-preferencing algorithms can have pro-competitive benefits. There is no consensus from the economic literature on whether pro-competitive benefits or possible anti-competitive considerations prevail in the context of self-preferencing algorithms used by digital platforms. Nor is there consensus on the welfare effects of policy intervention aimed at correcting bias in algorithmic recommendations. Determining the net impact of self-preferencing algorithms on competition and consumer welfare requires individualized analysis accounting for the workings of specific algorithms, competitive context, and market environment.

[1] Emilie Feyler and Dr. Veronica Postal are Consultants in NERA Economic Consulting’s Antitrust Practice in White Plains, NY. The opinions expressed are those of the authors and do not necessarily reflect the views of NERA Economic Consulting or other NERA experts.

[2] C. A. Gomez-Uribe & N. Hunt, The Netflix Recommender System: Algorithms, Business Value, and Innovation. 6 (4) ACM TRANSACTIONS ON MANAGEMENT INFORMATION SYSTEMS, 1–19 (2016).

[3] Pause Giant AI Experiments: An Open Letter, FUTURE OF LIFE INSTITUTE (March 22, 2023).

[4] Council Regulation (EC) 1/2003, Case AT.39740, Google Search (Shopping), 2017 O.J.

[5] Sheng Li, Claire Chunying Xie & Emilie Feyler, Algorithms & Antitrust: An Overview of EU and National Case Law, CONCURRENCES ANTITRUST CASE LAWS E-BULLETIN, Art. N° 102334 (Oct. 7, 2021).

[6] See the FTC’s guidance on tied sales: Single Firm Conduct: Tying the Sale of Two Products, FEDERAL TRADE COMMISSION (last visited May 9, 2023). See also the EC’s Guidelines on Vertical Restraints, EUROPEAN COMMISSION, ¶ 61.

[7] See the CMA’s discussion on algorithms and competition: CMA, Algorithms: How They Can Reduce Competition and Harm Consumers (Jan. 19, 2021).

[8] FTC, Statement of the Federal Trade Commission Regarding Google’s Search Practices in the Matter of Google Inc., FTC File Number 111-0163, pp. 3-4 (2013); Press Release, EC, Antitrust: Commission Fines Google €2.42 Billion for Abusing Dominance as Search Engine by Giving Illegal Advantage to Own Comparison Shopping Service (June 27, 2017).

[9] Andrea Pomana, Germany’s Google Controls Illustrate Global Antitrust Trend, LAW360 (Jan. 21, 2022); German Watchdog Probes Apple’s Market Dominance, BBC NEWS (June 21, 2021); New Rules Apply to Meta (Formerly Facebook) – Bundeskartellamt Determines its “Paramount Significance for Competition Across Markets,” BUNDESKARTELLAMT (May 4, 2022).

[10] Digital Markets Act: rules for digital gatekeepers to ensure open markets enter into force, EUROPEAN COMMISSION (Oct. 31, 2022).

[11] Regulation (EU) 2022/1925 of the European Parliament and of the Council, OFFICIAL JOURNAL OF THE EUROPEAN UNION (Sept. 14, 2022).

[12] Open App Markets Act, S. 2710, 117th Cong. (2021); American Innovation and Choice Online Act, S. 2992, 117th Cong. (2021).

[13] Subcommittee on Antitrust, Commercial, and Administrative Law of the Committee on the Judiciary of the House of Representatives, 117th Cong. 2d Sess., Investigation of Competition in Digital Markets (Comm. Print 2022).

[14] See Kittaka et al. (2023) for a detailed literature review. Yuta Kittaka, Susumu Sato & Yusuke Zennyo, Self-Preferencing by Platforms: A Literature Review, 66 JAPAN AND THE WORLD ECONOMY, 101191 (2023).

[15] Alexandre de Cornière & Greg Taylor, Integration and Search Engine Bias, 45 (3) RAND JOURNAL OF ECONOMICS, 576–597 (2014).

[16] Alexandre de Cornière & Greg Taylor, A Model of Biased Intermediation, 50 (4) RAND JOURNAL OF ECONOMICS, 854–882 (2019).

[17] Yusuke Zennyo, Platform Encroachment and Own-Content Bias, 70 (3) JOURNAL OF INDUSTRIAL ECONOMICS, 684–710 (2022).

[18] Andrei Hagiu, Tat-How Teh & Julian Wright, Should Platforms Be Allowed to Sell on Their Own Marketplaces?, 53 (2) RAND JOURNAL OF ECONOMICS, 297–327 (2022).

[19] Yuta Kittaka & Susumu Sato, Dual-Role Platforms and Self-Preferencing: Sequential Search Approach, SSRN (Oct. 13, 2022).