In most digital platforms, recommender systems provide consumers with recommendations across a variety of contexts. While recommender systems generate efficiencies by lowering the cost and improving the quality of product discovery, their impact on individuals’ purchases and consumption has the potential to affect downstream competition among products and industries. These systems may also present sensitive issues for national security, democracy, and public health. Recommender systems have therefore come under increasing scrutiny from governments around the world in recent years. Nonetheless, the scale of efficiencies and benefits offered by recommender systems motivates their continued use and expansion. In this paper we explore approaches that merge innovation and regulation as part of technological advancement. We offer an approach built on increased transparency on the part of companies regarding both their data and algorithms, as well as on collaborations between digital platforms, academics, and regulators. By taking responsibility for regulating their recommender systems in the short term, companies will be well-positioned to reap long-term benefits and to serve as leaders in the ecosystem. Improved regulation and monitoring by external bodies will also help cultivate the market. With digital regulation of these systems still in a relatively nascent stage, adopting these types of approaches can help shape a safe, competitive, and innovation-driven future.

By Rohit Chatterjee, Bartley Tablante, Sean Durkin, Anurag Gandhi, Abby Drokhlyansky & Marco Iansiti[1]

 

I. INTRODUCTION

In most digital platforms, recommender systems provide consumers with recommendations across a variety of contexts (e.g. the list of products in the “You might also like” section on Amazon). While recommender systems generate efficiencies by lowering the cost and improving the quality of product discovery, their impact on individuals’ purchases and consumption has the potential to affect downstream competition among products and industries. These systems may also present sensitive issues for national security, democracy, and public health. Unsurprisingly, recommender systems have come under increasing scrutiny from governments around the world in recent years.

Despite the associated risks, the scale of efficiencies and benefits offered by recommender systems motivates their continued use and expansion. In this paper we explore approaches that merge innovation and regulation as part of technological advancement. We offer an approach built on increased transparency on the part of companies regarding both their data and algorithms, as well as on collaborations between digital platforms, academics, and regulators. By taking responsibility for regulating their recommender systems in the short term, companies will be well-positioned to reap long-term benefits and to serve as leaders in the ecosystem. To further address the potential externalities of recommender systems, improved regulation and monitoring by external bodies will also help cultivate the market. With digital regulation of these systems still in a relatively nascent stage, adopting these types of approaches can help shape a safe, competitive, and innovation-driven future.

 

II. RECOMMENDER SYSTEMS ENABLE PROVISION OF PERSONALIZED CONTENT

Recommender systems are one of the most common types of machine learning systems, with billions of consumer interactions each day. They provide consumers with recommendations across a variety of contexts, including consumer retail products, music, movies, television, and social media content. In certain contexts (e.g. social media), a recommender system may be commonly referred to as “the algorithm” that recommends content on an online digital platform.

Recommender systems are an innovation that generates efficiencies by simplifying and lowering the cost of product discovery. For example, Amazon uses a recommender system to suggest products a user may wish to purchase based on current and historical product views and purchases, and it offers this capability to other companies as a product, Amazon Personalize. Spotify uses a recommender system to generate “made for you” playlists of new music based on the listening history of users and those similar to them. Facebook’s News Feed, Twitter, and TikTok’s “For You Page” all use recommender systems to determine which posts, tweets, or short videos to surface to users based on their engagement with prior content.

Recommender systems also have the potential to affect competitive dynamics, changing the products that individuals purchase and consume, affecting consumer preferences and changing long-term behavior in ways that are not yet well-understood. Due to the prevalence of recommender systems and their uncertain long-term effects, some regulators have promulgated regulations, proposed to regulate, or even suggested outright bans on the use of recommender systems in certain contexts.

A more nuanced approach to controlling the effects of recommender systems than overarching bans could still mitigate harms without precluding the tremendous efficiencies recommender systems create for consumers. Recent work in economics, computer science, and management science offers insights for how to further investigate recommender systems while preserving the benefits this innovation offers.

 

III. RECOMMENDER SYSTEMS TURN HISTORIC DECISIONS INTO PERSONALIZED RECOMMENDATIONS

Unlike humans, computer systems cannot easily or effectively reason in the abstract about the meaning of a work or its suitability to a given audience. Computer scientists have instead developed two methods that are mathematically simple for computers to perform yet yield highly effective recommendations.

The first approach is to recommend products or content similar to what a user has enjoyed in the past. This is known as content-based filtering. It is relatively fast, so it scales up well to millions of users, and it does not require knowing anything beyond the preferences of the specific user in question. However, it requires carefully modeling the type of content being provided by hand-engineering “features,” or aspects of the book or movie upon which the recommendation will be based, for example “genre: horror.”
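To make the mechanics concrete, below is a minimal content-based filtering sketch in Python. The catalog, feature names, and similarity scoring are illustrative assumptions, not any platform’s actual implementation.

```python
# A minimal content-based filtering sketch. Catalog, features, and
# titles are hypothetical; real systems use far richer feature sets.
import numpy as np

# Hand-engineered binary features: [horror, comedy, documentary]
catalog = {
    "Movie A": np.array([1.0, 0.0, 0.0]),
    "Movie B": np.array([1.0, 1.0, 0.0]),
    "Movie C": np.array([0.0, 0.0, 1.0]),
}

def recommend(user_history, catalog, top_k=2):
    """Score unseen items by cosine similarity to the user's taste profile."""
    # The user's profile is the average feature vector of items they enjoyed.
    profile = np.mean([catalog[title] for title in user_history], axis=0)
    scores = {}
    for title, features in catalog.items():
        if title in user_history:
            continue  # do not re-recommend items already consumed
        denom = np.linalg.norm(profile) * np.linalg.norm(features)
        scores[title] = float(features @ profile / denom) if denom else 0.0
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# A viewer who enjoyed a horror title is steered toward other horror titles.
print(recommend(["Movie A"], catalog))  # ['Movie B', 'Movie C']
```

Here the hand-engineered features play exactly the role described above: the system can only reason about content through the aspects a designer chose to encode.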

The second approach is to recommend content similar to what other similar users have watched or enjoyed in the past. This approach is known as collaborative filtering. Collaborative filtering can also scale up with sufficient computational power and avoids the need for hand-engineering features by learning what aspects of the content are important directly from the data itself. These two approaches to recommender system implementations can be used independently or in tandem to maximize monetization, data, and user engagement on a platform.
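A comparable sketch of user-based collaborative filtering follows. The toy ratings matrix and similarity-weighted prediction rule are illustrative assumptions; production systems typically use more sophisticated variants, such as matrix factorization, at much larger scale.

```python
# A minimal user-based collaborative filtering sketch over a toy
# ratings matrix (rows: users, columns: items; 0 means "not rated").
import numpy as np

ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 2.0, 1.0],
    [1.0, 0.0, 5.0, 4.0],
])

def predict(ratings, user, item):
    """Predict a missing rating as a similarity-weighted average of
    other users' ratings for the same item."""
    target = ratings[user]
    weighted_sum, total_weight = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue  # skip the user themselves and non-raters
        # Cosine similarity computed over items both users have rated.
        mask = (target > 0) & (ratings[other] > 0)
        if not mask.any():
            continue
        a, b = target[mask], ratings[other][mask]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        weighted_sum += sim * ratings[other, item]
        total_weight += abs(sim)
    return weighted_sum / total_weight if total_weight else 0.0

# User 0 has not rated item 2; similar users' history fills the gap.
print(round(predict(ratings, user=0, item=2), 2))
```

Note that no features were hand-engineered: the prediction relies only on patterns of agreement between users’ past ratings.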

A product manager exploring these technologies, for example, may choose to invest in data and modeling in exchange for improved engagement, increased monetization, and even more data. In this example, the data required for recommender systems are user interactions with content. Several factors, including user heterogeneity and content heterogeneity, influence the volume and type of data required. As an example, in 2017, when Netflix replaced its 5-point rating scale with a binary “thumbs up” or “thumbs down,” the change resulted in significantly more responses from viewers on its platform.

Companies use the output from recommender systems in several ways (a minimal code sketch follows the list below):

  • To select a specific product or content to recommend
  • To determine the ordering in a list of products or content
  • To select a customized menu of products as a bundle or sequential offering (for example, Amazon’s “Frequently bought together” option)
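As a simple illustration, all three uses can be derived from the same scored model output. The item names and scores below are hypothetical:

```python
# Hypothetical relevance scores produced by a recommender model.
scores = {"item_a": 0.91, "item_b": 0.67, "item_c": 0.54, "item_d": 0.12}

# Rank items from most to least relevant.
ranked = sorted(scores, key=scores.get, reverse=True)

top_pick = ranked[0]   # a single recommendation slot
ordered_list = ranked  # the ordering of a product or content list
bundle = ranked[:3]    # a "Frequently bought together"-style menu

print(top_pick, ordered_list, bundle)
```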

Successful recommender systems surface products that a consumer will purchase or content with which they will engage. This has several benefits for the implementer, including:

  • Increased transactions, which can directly increase monetization when algorithmic output results in financial transactions. These sales can be either direct, as in e-commerce, or indirect, as with advertising.
  • Increased satisfaction from using the product, which can result in greater use by existing users and in marginal users engaging with the recommendations.
  • Increased data to feed back into the recommender system to improve the accuracy of future recommendations, creating a positive feedback loop.

Because of the significant benefits available from improved recommender systems, companies invest significantly in their design. For example, in 2006 Netflix announced the Netflix Prize, an open challenge inviting teams to improve Netflix’s recommendation engine and offering a million dollars as a reward.[2] The benchmark was to improve predictive accuracy by 10 percent, measured as a reduction in the root mean squared error (“RMSE”) of predicted ratings.
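For reference, RMSE over a test set $T$ of user-item pairs, where $\hat{r}_{ui}$ is the predicted rating and $r_{ui}$ the actual rating user $u$ gave item $i$, is

$$\mathrm{RMSE} = \sqrt{\frac{1}{|T|} \sum_{(u,i) \in T} \left( \hat{r}_{ui} - r_{ui} \right)^2},$$

so winning required producing predictions whose RMSE was at least 10 percent lower than that of Netflix’s incumbent Cinematch system.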

The Netflix Prize showcased two key aspects of AI advancement: first, that external teams could help even a technology-first company like Netflix improve its algorithms; and second, that large quantities of user data could be tagged with a finite set of features on which an algorithm could be trained to make successful recommendations. The benefits of recommender systems motivate their prevalence, enabling platforms to target users with personalized content at an unprecedented scale. Recommender systems also, however, entail certain caveats that require consideration to determine the full scope of associated risks.

 

IV. RECOMMENDER SYSTEMS CREATE RISKS

The potential harms that recommender systems can generate are complex, and there is no consensus on how to mitigate them. One potential harm that applies broadly across fields is that recommender systems can create filter bubbles, as described by Michael Kearns and Aaron Roth in their book The Ethical Algorithm. By selectively serving information a user may find engaging and withholding contradictory viewpoints, recommender systems reinforce users’ existing worldviews and biases; this intellectual isolation is called a filter bubble.

There is emerging evidence from the fields of management science, industrial economics, healthcare, and computer science that recommender systems may cause impacts across a variety of domains of interest to policy makers. These domains include competition, national security, privacy, and public health.

A. Effects on Competition

Recommender systems can impact competition in two main ways. First, recommender systems may exhibit strong network effects, and the recommendations they issue may rely upon data that is of sufficient quality, scale, scope, and uniqueness to present a durable barrier to competition. For example, despite a large social network with which to bootstrap its launch, Instagram’s Reels has struggled to compete with TikTok, a product built on a short-form video recommender system. In this way, the next generation of AI-based digital platforms may exacerbate the winner-take-all nature of previous generations of online digital platforms.

Second, recommender systems may allow for various forms of self-preferencing, wherein a platform’s recommender system biases toward recommendations for a platform’s own products or services. Because recommender systems are generally not transparent in design to either users or regulators, such efforts may go undetected.

B. Effects on National Security

Recommender systems pose several potential harms to national security. Recommender systems may increase political polarization and extremism, particularly when designed to optimize platform engagement. For example, early recommender systems may have steered users interested in the politics of Islamic states toward increasingly extremist content.

Recommender systems controlled by one nation-state may present an opportunity for covert influence over the citizens of another nation-state. For example, several nations have banned or conducted national security reviews of ByteDance’s TikTok application.[3] Industry analysts at the time feared that videos pushed by the platform’s recommender system could steer unwitting users toward views supportive of China’s government.[4] In September 2020, the DOJ filed an explanation of its proposed ban of TikTok, alleging that the application “is a mouthpiece for the CCP in that it is committed to promoting the CCP’s agenda and messaging.”[5] This ban was overturned[6] by a federal judge in December 2020, before the Biden administration ultimately agreed to drop the litigation against TikTok in June 2021, pending a review by the Commerce Department.[7]

C. Effects on Privacy

Recommender systems may unintentionally leak information about one user’s prior behavior to other users of the system. For example, one user visiting a platform from the same internet connection as another user who viewed privacy-sensitive content may see that content recommended to them.

D. Effects on Public Health

The usage of social media may be correlated with negative psychiatric effects such as increased anxiety and depression, particularly in children and young adults. While the state of causal evidence remains unclear, these potential effects have prompted government investigations. Some platforms, such as TikTok,[8] have recognized that optimizing content for personalization and relevance can result in homogeneous streams of content with addictive properties.

 

V. ALTHOUGH IN EARLY STAGES, REGULATORY SCRUTINY OF RECOMMENDER SYSTEMS IS INCREASING ACROSS THE WORLD

Recommender systems have come under increasing scrutiny from governments around the world in recent years. Digital regulations are in a relatively nascent stage, and lawmakers in different countries are exploring various potential solutions that address both the creation and dissemination of content on digital platforms.

A. United States

In the United States, former and current employees of the largest social media platforms have blown the whistle on concerns such as the spread of misinformation and polarization. In response, a bipartisan group of lawmakers drafted the Filter Bubble Transparency Act,[9] which would require platforms to let people use a version of their services where content is not selected by “opaque algorithms” driven by personal data. The requirements stipulated in this bill do not apply when algorithmic ranking systems use personal data “expressly provided” by the user “to determine the order or manner in which information is delivered to them” (such as search terms, filters, speech patterns, saved preferences, social media profiles, and content followed by the user). The Senate version of this bill[10] would empower the Federal Trade Commission to enforce the new regulations and impose civil penalties.

Another bill introduced in the House last year is the Justice Against Malicious Algorithms Act,[11] which would amend section 230 of the Communications Act of 1934 and limit the liability protections granted to providers of internet services. If this bill were to pass, platforms could face lawsuits for allegedly serving false, misleading, or dangerous information to their users.

B. China

In China, the Cyberspace Administration is developing a set of regulations that will govern the design of recommender systems and give users the ability to stop platforms from using their data. It announced in a statement late last year that companies must abide by business ethics and principles of fairness and should not set up algorithmic models that entice users to spend large amounts of money or to spend money in ways that may disrupt public order.[12] These new regulations are the first of their kind and will require tech companies operating in China to make significant investments in compliance and to change the way they operate. While critics claim that these regulations could result in infringements on free speech, it is noteworthy that China is a global leader in the regulation of AI algorithms.

C. European Union

The European Union is also developing its own set of regulations under the broader umbrella of the Digital Services Act (“DSA”), which aims to create a safer digital space where the rights of users are protected, and businesses can compete on a level playing field.[13] Article 29 of the draft DSA requires Very Large Online Platforms (“VLOPs”) to set out in their terms and conditions the main parameters used in their recommender systems, as well as any options for users to modify those parameters, including at least one not based on profiling.[14]

 

VI. UNLOCKING THE VALUE OF RECOMMENDER SYSTEMS REQUIRES FURTHER STUDY OF BEHAVIORAL AND SOCIETAL OUTCOMES

In the absence of a regulatory consensus, we recommend steps to gather additional data by incentivizing cooperation, increasing transparency, and expanding joint efforts among digital platforms, regulators, and academia. Such efforts can enable policy makers to better understand and address the potential harms of recommender systems while preserving the efficiencies they offer.

Before the issues with recommender systems can be understood well enough to approach solutions, the clashing incentives between digital platforms, academics, and regulators must be aligned.

Academics possess robust tools and theoretical approaches which may ameliorate harms but lack the access to data and systems needed to test their hypotheses. Currently, academics are constrained to conduct analyses based primarily on external observations of a recommender system’s operation and limited tools such as user surveys. These methods can often reveal correlational evidence but struggle with problems of reverse causality. For example, teens who use social media with high frequency may suffer higher rates of depression, but it may be the depression causing the social media use rather than the reverse.

Digital platforms possess the data and systems necessary to measure the effects of recommender systems on various outcomes of interest to policy makers. However, because the costs of recommender system malfunction are externalized to users and society at large, digital platforms have little incentive to investigate these issues. Even those digital platforms wishing to investigate societal harm fear increasing their liability if their investigations were to substantiate regulatory concerns.

By instead choosing to invest in the transparency of their recommendation algorithms, companies can position themselves in a way that prepares them for potential future regulations, promotes sustainable collaborations with external parties, generates user trust and satisfaction, and prevents undesirable externalities in the long term. For example, platforms can conduct and publish regular audits of their recommender systems and monitor their performance to proactively avoid causing harms. By leveraging transparent partnerships with academic researchers and regulators, companies can prevent information silos and receive cross-functional input to prevent potential harms across fields. For example, a recommender system causing privacy harms could also impact children’s mental health; without input from experts in both domains, this could result in duplicative efforts to address the two issues separately.

Digital platforms should adopt an approach of maximal public transparency and regulatory cooperation in the near term. Since digital platforms exist in the same societal substrate as their users, they rely upon the continued stability of their users’ societies for their long-term profitability. It is therefore in the long-run self-interest of digital platforms to investigate societal harms. Investing in best practices for recommender system regulations and algorithmic transparency in the short term will position digital platforms to respond to future regulations, ensure survivability, and garner positive publicity.

Regulators also have a number of tools at their disposal to encourage companies to better investigate and document potential issues arising from recommender systems. As one example, privacy regulators can investigate and where necessary litigate to address privacy harms arising from recommender systems.

The greatest potential harms of recommender systems, those related to polarization, extremism, and mental health, are also the most diffuse. As a result, their extent is difficult to establish. Because of the potential political impacts of recommender systems in these areas, there is high potential for accusations of bias and mismanagement. Research into these issues needs to be conducted in a thoroughly scientific, non-partisan fashion for these harms to be effectively addressed.

Legislation can both increase the immediacy of long-run societal costs through fines and lower the barriers to transparency by creating mechanisms for review such as audits. Legislation can also create historical exceptions for digital platforms for prior harms caused by recommender systems, in exchange for greater future regulatory access to necessary data and tools.

Understanding the potential harms caused by widely deployed new technologies, including recommender systems, is an ongoing effort. As these problems are complex and multi-disciplinary, input from a variety of stakeholders is necessary to approach solutions.

 

VII. COMPANIES SHOULD TAKE THE OPPORTUNITY TO IMPLEMENT REGULATIONS OF RECOMMENDER SYSTEMS

Since regulations are not yet in force, now is the time for corporations to develop best practices and sustainable standards for their approaches to recommender systems. Corporations have the opportunity to lead in establishing comprehensive programs integrated within their business operations to generate exemplary industry protocols for all. Several key components to consider for such a program are: Inventory, Audit and Measurement, Compliance and Governance, and Transparency and Education.

  • Inventory: Firms can maintain an inventory of all recommendation algorithms used on their platform, along with their corresponding training datasets. This would provide a resource for tracking ecosystem effects over time and auditing algorithmic performance and biases.
  • Audit and Measurement: Firms can develop metrics, such as tracking content homogeneity or assessing time spent by users on the platform as a check for addiction, to measure how their recommendation algorithms affect the ecosystem (a minimal homogeneity sketch follows this list).
  • Compliance and Governance: Firms can develop a holistic strategy for the role AI/ML will play within the organization and clear reporting structures that allow for multiple checks of recommender systems before and after they are put into production use. Firms can actively track the latest regulations in all jurisdictions where they operate and adapt their recommendation algorithms in compliance with local law or to address user complaints.
  • Transparency and Education: Firms can allow users easy access to details of how their data is being used and why they are being recommended certain content. They can also make such information easily accessible for academics, non-profits, and regulators. Where information transfer alone is not sufficient to answer key public policy questions, digital platforms should work with regulators to structure analyses that can be conducted using companies’ tooling and data in a manner sensitive to user privacy.
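As an illustration of the content-homogeneity metric referenced above, one simple approach is to measure the average pairwise similarity of items served to a user in a session. The embeddings below are toy values; a real audit would derive them from the platform’s own models.

```python
# A hypothetical content-homogeneity metric: the average pairwise cosine
# similarity of items recommended in a session. Embeddings are toy values.
import numpy as np
from itertools import combinations

session_embeddings = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.7, 0.3, 0.0],
])

def homogeneity(embeddings):
    """Mean cosine similarity over all item pairs; values near 1.0
    suggest a narrow, potentially filter-bubble-like content stream."""
    sims = [
        a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        for a, b in combinations(embeddings, 2)
    ]
    return float(np.mean(sims))

print(round(homogeneity(session_embeddings), 3))
```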

Building on these foundational components, firms can develop a continuous feedback loop to further hone their recommender system risk management programs (see Figure 1). This feedback loop has three steps: build an inventory of recommendation algorithms, conduct regular conformity assessments and audits, and establish a governance system.

Figure 1: Feedback loop to improve risk management program

 


[1] Keystone Strategy. Marco Iansiti is also affiliated with Harvard Business School, Harvard University. The authors would also like to disclose that Keystone Strategy has worked with most digital platform firms, including Amazon, Meta and Microsoft as well as a number of regulatory agencies involved in platform related regulation or controversy, including the US Department of Justice and other U.S. and European regulatory authorities and governing bodies.

[2] https://lsa.umich.edu/social-solutions/diversity-democracy/oci-series/excerpts/volume-ii/the-netflix-prize.html.

[3] https://www.axios.com/2020/08/06/tiktok-bans-worldwide-china.

[4] https://stratechery.com/2020/the-tiktok-war/.

[5] TikTok Inc. v. Trump, DOJ Memorandum in Opposition to Motion for Preliminary Injunction, https://www.documentcloud.org/documents/7218230-DOJ-s-MEMORANDUM-in-OPPOSITION-to-TIKTOK.html.

[6] https://abcnews.go.com/Politics/wireStory/judge-blocks-trumps-tiktok-ban-app-limbo-74604450.

[7] https://www.pacermonitor.com/view/LHZIOGA/TIKTOK_INC_et_al_v_TRUMP_et_al__dcdce-20-02658__0071.0.pdf?mcid=tGE3TEOA.

[8] https://newsroom.tiktok.com/en-us/how-tiktok-recommends-videos-for-you.

[9] https://www.govtrack.us/congress/bills/117/s2024/text.

[10] https://www.thune.senate.gov/public/_cache/files/27e9a4ad-1d45-4191-8e46-8694dc5b0bbe/F1482EA8F7D55FB810C2D651F784C490.lyn21514.pdf.

[11] https://www.congress.gov/bill/117th-congress/house-bill/5596/text.

[12] https://mp.weixin.qq.com/s/XdQVqqjJdLRlL0p6jlbwsQ.

[13] https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package.

[14] Draft DSA, pp. 61-62.