It is impossible to imagine digital services without algorithmic search and recommender systems. They have the important function of pre-sorting the flood of information online for users. However, they also harbor the risk of competitive distortions and ideological bias. At the European level, the Digital Services Act responds to this primarily with transparency obligations, which we analyze in this article from a law and economics perspective. We conclude that the approach prevents neither possible distortions of competition nor ideological media bias. Therefore, there is a risk that the DSA's transparency requirements will remain a paper tiger.
By Oliver Budzinski & Madlen Karg[1]
I. INTRODUCTION
Among the various phenomena of the world of digital services, the channeling of users’ attention to a pre-selection of goods through algorithmic search and recommender systems (ASRS) represents one of the most important issues. On the one hand, information overload on the internet requires some pre-selection; on the other hand, the power of these algorithms raises doubts and fears about their impact on competition and society. The EU Digital Services Act (DSA) applies a cautious regulation of ASRS. In order to assess its adequacy from a law and economics perspective (sections 3 and 4), we first take a look at the economics behind these systems (section 2).[2]
II. THE ECONOMIC ROLE OF (ALGORITHMIC) SEARCH AND RECOMMENDATION SYSTEMS
A. Information Overload, Search Costs, and Decision-Making
Digital markets are usually characterized by information overload since the amount of goods and contents offered on the internet in general and on specific online marketplaces (à la Amazon), audio and video streaming services (e.g. Spotify, YouTube, Netflix, etc.), or in app stores regularly exceeds the information processing capacities of users. Therefore, it is necessary that online services provide a pre-selection of the available items to users. Only this artificial reduction of the perceivable range of supply allows users to make a rational consumption choice among commodities, services, and contents.
This pre-selection of contents is usually based on search and recommendation systems, often automated through algorithms. In the case of search services, the initiative lies with the user, who enters a search inquiry and receives so-called hits as a response from the system. These hits are not presented in a random order; instead, they are ordered with the goal of providing the best-fitting response first. As such, search systems include an element of recommendation through the immanent ranking of the hits. Pure recommendation systems proactively address users and suggest further items that they may like to consume. Such systems range from “other users also bought”-style recommendations to auto-play versions where the next recommended audio or video stream automatically starts after the chosen one has ended. Like the ranking in search systems, recommendation systems try to offer the best next choice option to the user and do not present items in a random order.
The ranking of search results and recommendations influences the choice of users. The top-ranking positions receive significantly more attention than the items further down the order. Empirical studies confirm that most users only perceive the first 4-5 search hits or recommendation items and, thus, de facto only choose among these contents, commodities, and services.[3] The theoretical explanation refers to the scarcity of cognitive resources and the transaction costs of choice. Rational users will not devote unlimited cognitive resources to searching for and choosing among goods, especially not in situations of information overload. Instead, they stop the search and choice process as soon as a good or content is found that sufficiently satisfies their need (although it may not be the ultimately optimal good), thus following a concept of “satisficing.”[4] How many cognitive resources users spend on a search and choice process depends on how important the respective good is for them: while routine consumption involves comparatively few cognitive resources and a satisficing level is quickly achieved, extraordinary consumption involves more thorough search and more careful choice decisions.[5]
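For illustration only, the satisficing logic behind this behavior can be sketched in a few lines of code; the utility values, aspiration level, and attention span below are our own hypothetical assumptions and are not taken from the cited studies.

```python
# Illustrative satisficing search over a ranked list (hypothetical values).
# The user inspects items top-down and stops at the first item whose
# perceived utility meets an aspiration level, rather than scanning all hits.

def satisficing_choice(ranked_utilities, aspiration_level, attention_span=5):
    """Return the position of the first item that meets the aspiration level,
    inspecting at most `attention_span` top-ranked items (None if nothing does)."""
    for position, utility in enumerate(ranked_utilities[:attention_span]):
        if utility >= aspiration_level:
            return position  # stop searching: a "good enough" item was found
    return None  # no satisfactory item within the attended part of the ranking

# Example: only the top five hits are inspected, so an even better item
# further down the ranking is never considered.
ranking = [0.4, 0.55, 0.62, 0.3, 0.5, 0.9, 0.95, 0.99]
print(satisficing_choice(ranking, aspiration_level=0.6))  # -> 2
```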
Many online services individualize the ranking of search results and recommendations so that each user receives her individual ranking, based upon (i) personalized data about the user,[6] (ii) data about users that are to some degree similar, and (iii) general knowledge about popular contents. In other words, the underlying algorithms try to estimate the preferences of the individual user based upon the available data and provide a best-match ranking. The quality of the personalized ranking depends on data availability and algorithm intelligence. Generally, the systems work considerably better for mainstream preferences and for homogeneous niche interests than for diversity-preferring non-mainstream interests.
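A deliberately simplified sketch may illustrate how such an individualized ranking blends the three data sources; the weighting scheme, function name, and all data values are our own illustrative assumptions and do not describe any actual service's algorithm.

```python
import numpy as np

# Toy personalized ranking: blend (i) the user's own revealed preferences,
# (ii) preferences of "similar" users, and (iii) general popularity.
# All weights and data are illustrative assumptions.

def personalized_scores(own_prefs, peer_prefs, popularity,
                        w_own=0.6, w_peers=0.3, w_pop=0.1):
    """Each argument is a vector with one score per candidate item;
    the returned vector determines the individualized ranking."""
    return w_own * own_prefs + w_peers * peer_prefs + w_pop * popularity

own   = np.array([0.9, 0.1, 0.4, 0.0])  # estimated from the user's own history
peers = np.array([0.2, 0.8, 0.5, 0.3])  # averaged over similar users
pop   = np.array([0.5, 0.9, 0.1, 0.7])  # platform-wide popularity

scores = personalized_scores(own, peers, pop)
ranking = np.argsort(-scores)           # best-match item first
print(ranking)                          # e.g. item 0 ranks first for this user
```

The better the data on the individual user, the more weight such a blend can place on the first component; where that data is thin, the estimate falls back on similar users and popularity, which is why the systems tend to work better for mainstream and homogeneous niche preferences.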
B. Welfare Effects
Economic research identifies three positive welfare effects of individualized ASRS:
- They provide a necessary pre-selection in the face of information overload and, thus, are a necessary condition for consumer choice.
- They provide rankings that approximate the preferences of the users, thus, contributing to a preference-oriented supply in the digital world.[7]
- Due to the individualization, they deliver a broader choice menu to the overall group of users since every user gets a different set of pre-selected items. Thus, an overall larger set of goods is brought to the attention of the users as a whole.
Alternative regimes struggle to provide these welfare effects. A random ranking fails to achieve the first two advantages.[8] A ranking decided by a human editorial board that provides a one-size-fits-all ranking – as in the traditional media world of newspapers, magazines, radio, and television channels – performs worse (apart from efficiency considerations) on the second and third welfare dimensions, i.e. the outcome would represent a worse fit to user preferences and the range of pre-selected contents would be smaller.
Notwithstanding their beneficial effects, ASRS still present a barrier to market entry: only those items that are listed and ranked sufficiently prominently de facto participate in market competition. This generates gatekeeping power (which arises already well below the threshold of market power), which can be (ab)used:
- Self-preferencing comprises strategies where the ranking is employed to systematically up-rank the company’s own items and/or to systematically down-rank the items of competitors.[9]
- Media bias refers to the deliberate ideological biasing of ranking results regarding news items and/or cultural agendas.
These abusive strategies require a deliberate twisting of the algorithm to implement the ranking bias. The counter-effect, limiting gatekeeping power, would be users switching to competing services if they face artificial distortions of search and recommendation rankings. However, next to having an alternative, this requires that users recognize gradual distortions of such rankings. This is unlikely because of the very logic of the usefulness of ASRS: due to systemic information overload, users cannot overview all potential offers and depend on selecting within the pre-selected commodities, services, and contents. Only a recurrent comparison of different services and their rankings could help identify a gradual decrease in ranking quality due to artificial biasing. This, however, increases transaction costs and, thus, is rationally unlikely to be conducted in routine consumption situations (but may work for extraordinary consumption). Therefore, transparency requirements must be expected to be ineffective (in the majority case of routine consumption) if they are accompanied by increasing transaction costs.
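A stylized sketch may illustrate how little it takes to implement such a ranking bias and why it is hard for individual users to detect; the bias term, item names, and quality scores are hypothetical assumptions, not a description of any actual platform's code.

```python
# Stylized self-preferencing: a small additive bias applied before sorting
# is enough to systematically up-rank the platform's own items and
# down-rank rivals, without being obvious on any single result page.
# All values are hypothetical.

def biased_ranking(items, quality_scores, own_items, bias=0.15):
    """Sort items by quality score plus a bias favoring the platform's own items."""
    def score(item):
        s = quality_scores[item]
        if item in own_items:
            return s + bias      # up-rank own offers
        return s - bias          # down-rank competitors
    return sorted(items, key=score, reverse=True)

items = ["rival_A", "own_service", "rival_B"]
quality = {"rival_A": 0.80, "own_service": 0.70, "rival_B": 0.60}
print(biased_ranking(items, quality, own_items={"own_service"}))
# -> ['own_service', 'rival_A', 'rival_B'] although rival_A has the higher quality
```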
Furthermore, the focus on the preferences of the users may lead to an issue that, by contrast, does not require any deliberate twisting of the algorithm:
Echo chamber effects and filter bubbles may result from the self-reinforcing character of ASRS if they always provide users with more of the same, since this matches their estimated preferences. The confrontation with new (types of) content – which may be either simply disliked by an individual user or develop taste-building effects (i.e., discovering new things you like) – may not happen anymore. The frequency and magnitude of such effects – beyond the deliberate ignorance of a specific type of user actively pursuing entrance into an echo chamber – are controversially discussed in the literature.[10]
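The self-reinforcing mechanism can be illustrated with a toy feedback loop in which the service's preference estimate is updated toward whatever the user just consumed; the parameters (number of topics, learning rate) are illustrative assumptions, and the sketch makes no empirical claim about any actual service.

```python
import numpy as np

# Toy feedback loop: the estimated preference vector is updated toward
# whatever the user just consumed, and the next recommendation is drawn
# from that estimate, so exposure can narrow over time ("more of the same").
# Parameters are illustrative assumptions.

rng = np.random.default_rng(0)
n_topics = 5
estimated_prefs = np.full(n_topics, 1.0 / n_topics)  # start with uniform exposure
learning_rate = 0.3

for step in range(50):
    # recommend a topic in proportion to the current preference estimate
    topic = rng.choice(n_topics, p=estimated_prefs)
    # consumption feeds back into the estimate
    update = np.zeros(n_topics)
    update[topic] = 1.0
    estimated_prefs = (1 - learning_rate) * estimated_prefs + learning_rate * update

print(np.round(estimated_prefs, 2))  # mass typically concentrates on a few topics
```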
III. THE REGULATION OF RECOMMENDATION RANKINGS IN THE DIGITAL SERVICES ACT
A. The Digital Services Act (“DSA”)
After a long drafting and negotiation process, the DSA was published in the Official Journal of the European Union on 27 October 2022.[11] It aims to better protect consumers and their fundamental rights online by establishing a transparency and accountability framework for online services. In addition to requirements for the moderation of user-generated content, it also addresses information distortions caused by ASRS by imposing transparency obligations.
The DSA applies to “intermediary services” offered to users that are located or have their place of establishment in the Union (Art. 2 (1) DSA). The due diligence obligations are adapted to the type, size, and nature of the intermediary service and increase gradually in four stages.[12] While only basic obligations apply to infrastructure providers such as internet access providers or domain name registrars, they expand for “hosting service providers” that provide cloud and web hosting services, and “online platforms” that bring sellers and consumers together, such as app stores or online marketplaces. For “very large online platforms” and “very large online search engines” with at least 45 million average monthly active users in the EU (Art. 33 DSA), the DSA establishes the most stringent requirements, as they may pose particular risks for the distribution of illegal content and, thus, may cause societal harms.
B. Recommender Systems in the DSA
The DSA acknowledges that recommender systems have a significant impact on the ability of users to retrieve and interact with information online. It responds to the negative societal effects of ASRS with transparency requirements and obliges services to ensure that users are adequately informed about how recommendation systems affect the display of information. To achieve this, the functionality of a recommendation ranking as well as its parameters shall be explained in an easily comprehensible manner.[13]
Firstly, all providers of online platforms that use recommender systems shall set out in their terms and conditions, in plain and intelligible language, the main parameters used in their recommendation rankings, which include at least the criteria that are most significant in determining the information suggested to the user, as well as the relative importance of those parameters (Art. 27 (1) and (2) DSA).
If several parameters may determine the relative order of information presented to users, providers of online platforms must make available a functionality that allows users to select and modify their preferred option at any time (Art. 27 (3) DSA). Very large online platforms and very large online search engines are further obliged to offer users at least one option of the recommendation system that is not based on user preferences and personalized data (profiling) (Art. 38 DSA).
The impact of ASRS must also be explicitly included in the mandatory annual assessment of systemic risks by very large online platforms and very large online search engines (Art. 34 (2) (a) DSA). Besides the risk assessment, those providers also need to take measures to mitigate the systemic risks of their services. For this purpose, the testing and adaptation of their ASRS is mandatory (Art. 35 (1) (d) DSA). Pursuant to Art. 40 (1) DSA and upon request, very large online platforms and very large online search engines are required to grant the supervision and enforcement authorities (Art. 49 (2) DSA)[14] access to the data necessary to assess and monitor compliance with the DSA. Also upon request, they must explain the design, logic, functioning, and testing of their ASRS (Art. 40 (3) DSA). The authorities can also order that data access be given to “vetted researchers” for the detection, identification, and understanding of systemic risks caused by very large online platforms or very large online search engines (Art. 34 (1) DSA) and for the assessment of the adequacy, efficiency, and impacts of the risk mitigation measures pursuant to Art. 35 DSA (Art. 40 (4) DSA). “Vetted researchers” are subject to various conditions (Art. 40 (8) DSA), which include affiliation with a research organization, independence from commercial interests, and the capability to fulfill data security and confidentiality requirements.
C. Transparency as a Regulatory Solution?
The transparency provisions for ASRS in the DSA envisage promoting user autonomy and enabling informed choices by reducing information asymmetries between online service providers and users. They affect all online platforms that use algorithmic systems, with very large online platforms again being subject to more extensive obligations. In light of their market dominance, a size-based regulatory concept is generally viewed as appropriate.[15]
The DSA does not attempt to regulate (the diversity or pluralism of) recommendation ranking outputs but aims to empower users to make better-informed choices based on more information on how the algorithms process information. This stands in line with the inherent pro-diversity effect of individualized rankings and the mixed research results concerning echo chamber and filter bubble effects, which do not indicate the necessity of imposing diversity obligations on ASRS outputs (see section 2).
Instead, the DSA focuses on imposing transparency obligations. This is not done through obligations to disclose algorithms, which (i) are trade secrets, (ii) would restrict competition for the best systems, and (iii) would not effectively help consumers due to the complexity of the matter. It is utopian to reach a level of algorithmic transparency at which users could fully understand the logic of an algorithm – something that often not even experts manage. Thus, the question is whether the limited transparency provisions (as described in section 3.2) will actually empower users to better understand recommender rankings and/or detect artificial biasing. Instead, the transparency obligations may either turn out to be a paper tiger or a transaction costs-increasing tool that most users find annoying.[16]
The DSA obligations focus on disclosing the main parameters that determine the ranking results. On the one hand, this may be too narrow to effectively reduce information asymmetries and enable better-informed choices – or even the detection of biasing. The interdependence of user behavior (uploading, subscribing, consuming, (dis-)liking content, etc.) and the algorithmic output – which mutually influence each other – may not be captured by merely disclosing the main parameters of the algorithms.[17] On the other hand, the willingness of rational users to spend cognitive resources on informing themselves about and customizing ASRS is likely to be exhausted very quickly – at least for everyday routine consumption choices (see section 2).
Moreover, even if users gain insight into how recommender rankings work, this does not necessarily increase the probability that they switch to another service. While this is obvious in cases of market dominance of service providers (locking in consumers),[18] gatekeeping effects also occur outside the scope of traditional market dominance in less concentrated markets (see section 2). The regulatory goal of informed and autonomous user choices neglects the inherent information overload issues that make individual users dependent on a pre-selection service and give them little power to identify (gradually) suboptimal ranking results. Even if they dislike the way a recommendation ranking works, switching costs may be considerable. The concept of user autonomy based on transparency further burdens individuals with additional transaction costs: they are expected to seek and interpret information by themselves.[19] The higher the information costs of users, the greater the leeway tends to be for service providers.[20] Ironically, forcing users to recognize and deal with settings (e.g. by pop-ups preventing an uninformed use) also increases transaction costs, especially regarding routine consumption, and may be welfare-decreasing in this regard (see section 2).
Furthermore, in scenarios with personalized recommendation rankings, the disclosure of the main algorithm parameters offers no insight into possible systemic biases in the algorithm output, since the ranking is different for each user.[21] In the past, researchers have tried to conduct studies surveying a large number of different user outputs, but platform providers have put considerable effort into preventing researchers from evaluating a larger base of algorithmic outputs across society.[22] At this point, the DSA provides an improvement: data access for vetted researchers pursuant to Art. 40 (4) for systemic risk management and mitigation includes ASRS (Art. 34 (2) (a) and Art. 35 (1) (d)).
In addition to the disclosure of the main parameters that influence a recommender ranking, very large online platforms and very large online search engines must offer an option of their ASRS that is not based on user preferences (Art. 38 DSA), which ultimately increases consumer choice on the system level.[23] While it is up to the individual service provider to pick an alternative (see section 2 for possible alternatives and their welfare effects), the most probable solution is an algorithm-based display of the most popular content, which leads to the same content being displayed to every user who chooses this option. An editorial selection looks unlikely, since acting as an editor is precisely what the platform providers claim not to be, and random rankings would be accompanied by a considerable loss of quality in the search and recommender ranking service – up to the point of the search system being completely useless. From an economic point of view, an obligation to offer a non-personalized ranking option can lead to a reduction in the diversity of algorithm outputs (see section 2), which would be an undesirable regulatory side effect.
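A non-personalized option of this kind could be as simple as the following sketch, which displays the same popularity-ordered list to every user; the function and data values are our own hypothetical illustration of what providers might implement, not a description of any announced design.

```python
# Illustrative non-personalized option in the spirit of Art. 38 DSA:
# ignore the individual profile and show every user the same
# popularity-ordered list. Data values are hypothetical.

def popularity_ranking(view_counts):
    """Return item ids ordered by platform-wide popularity, identical for all users."""
    return sorted(view_counts, key=view_counts.get, reverse=True)

views = {"item_a": 120_000, "item_b": 450_000, "item_c": 87_000}
print(popularity_ranking(views))  # same for every user: ['item_b', 'item_a', 'item_c']
```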
Depending on the design and implementation, users could be annoyed by a mandatory selection between a personalized and a non-personalized ranking whenever they visit a service (annoyance costs as a type of transaction costs). Such a design would not be economically beneficial as it would increase users’ transaction costs. Furthermore, the parameters that users can ultimately influence are only a fraction of what the algorithm processes, which could create a misleading image of transparency for users.[24] Overall, it may therefore be doubted whether Art. 38 DSA will bring a desirable development regarding the comprehension of ASRS beyond merely increasing consumers’ freedom of choice on the system level through the additional non-personalized option. Notwithstanding, an intelligent design of this additional option might benefit some consumers while not decreasing welfare for the majority of routine consumption decisions – and thus do no harm.
IV. CONCLUSIONS
Regulating ASRS presents a challenge of balancing beneficial effects with possible pitfalls and scope for abuse. The DSA provides a cautious regulatory approach that may not achieve much from a rational-choice economic perspective but – depending on the design and implementation of the mandatory non-personalized option – is likely to leave the beneficial effects untouched. Still, understanding the behavior of users in choice situations (as outlined in section 2) is paramount for further developing any regulation of ASRS. Based upon our welfare analysis in section 2, we can summarize whether and how the DSA combats the downsides of ASRS:
- Does the DSA solve the problem of self-preferencing? The answer is a clear no. However, self-preferencing is explicitly addressed by its sister act, the Digital Markets Act (DMA), which prohibits self-preferencing in general. Unfortunately, the DMA obligation only applies to selected so-called core platform services and will not address many ASRS with gatekeeping effects.[25] The DSA would have been an option to extend the ban on self-preferencing beyond core platform services.
- Does the DSA solve the problem of (ideological) media bias? Pure transparency obligations are probably too weak for this problem. For news and news-related rankings, an obligation to consider the quality of a source within the ASRS may be a way forward despite the non-trivial issue of defining the right quality criteria.[26]
- Does the DSA solve the problem of echo chambers/filter bubbles? This cannot be expected either, but perhaps these are not the most pressing problem, especially if the former issue is addressed.
[1] Oliver Budzinski is Professor of Economic Theory and the Director of the Institute of Economics at Ilmenau University of Technology, Germany. Madlen Karg is Research and Teaching Fellow at the Department of European Law and Public International Law at the University of Innsbruck, Austria.
[2] This article draws particularly on Budzinski, O., Gaenssle S. & Lindstädt-Dreusicke, N. (2022), Data (R)Evolution – The Economics of Algorithmic Search & Recommender Services, in: Baumann, S. (ed.), Handbook on Digital Business Ecosystems (Edward Elgar), pp. 349-366 and Budzinski O., Karg, M. (2023), Gatekeeper, Marktmacht und die Regulierung von Onlinediensten, Staatswissenschaftliches Forum, 6 (1), forthcoming, which deliver more in-depth analyses of the issues discussed here.
[3] Inter alia, Pan, B., et al. (2007), In Google We Trust: Users’ Decisions on Rank, Position and Relevancy, Journal of Computer-Mediated Communication, 12 (3), pp. 801-823.
[4] Simon, H. A. (1955), A Behavioral Model of Rational Choice, The Quarterly Journal of Economics, 69 (1), pp. 99-118; Güth, W. (2010), Satisficing and (Un)Bounded Rationality: A Formal Definition and Its Experimental Validity, Journal of Economic Behavior and Organization, 73 (3), pp. 308-316; Caplin, A., Dean, M. & Martin, D. (2011), Search and Satisficing, American Economic Review, 101 (7), pp. 2899-2922; Güth, W., Levati, M. V. & Ploner, M. (2012), Satisficing and Prior-free Optimality in Price Competition, Economic Inquiry, 50 (2), pp. 470-483.
[5] Vanberg, V. J. (1994), Rules and Choice in Economics (Routledge); Budzinski, O. (2003), Cognitive Rules, Institutions and Competition, Constitutional Political Economy, 14 (3), pp. 215-235. Examples of routine consumption would be, for many consumers, the choice of washing powder in the supermarket, music for easy listening, or videos to calm down from a hard day’s night. By contrast, more cognitive resources may be invested in the planning of a special holiday trip or media content for a special evening. Individuals differ a lot here, of course.
[6] Personalized data usually consists of standard identification data, behavioral data like revealed preferences (for instance, through online shopping and individual search/browsing histories) and stated preferences (like ratings, likes, follows, comments, etc.), and derived data combining the former categories complemented with data of similar individuals (Budzinski, O., Kuchinke, B. A. (2020), Industrial Organization of Media Markets and Competition Policy, in: Rimscha (ed.), Management and Economics of Communication (DeGruyter), pp. 21-45).
[7] For empirical evidence see, inter alia, Thurman, N., et al. (2019), My Friends, Editors, Algorithms, and I, Digital Journalism, 7 (4), pp. 447-469.
[8] Evidence can easily be produced by self-experimenting: try to only use page 50 or 100 of the search items for every search inquiry. For many inquiries, no useful hit will be found.
[9] With further references see, for instance, Bougette, P., Budzinski, O. & Marty, F. (2022), Self-Preferencing and Competitive Damages: A Focus on Exploitative Abuses, The Antitrust Bulletin, 67 (2), pp. 190-207.
[10] See Gentzkow, M. A., Shapiro, J. M. (2011), Ideological Segregation Online and Offline, Quarterly Journal of Economics, 126 (4), pp. 1799-1839; Zollo, F., et al. (2015), Debunking in a World of Tribes, arXiv:1510.04267; Schnellenbach, J. (2018), On the Behavioral Political Economy of Regulating Fake News, ORDO, 68 (1), pp. 159-178.
[11] Regulation (EU) 2022/2065 of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L277/1.
[12] DSA, Recital 41.
[13] DSA, Recital 70.
[14] These are the European Commission as well as in each member state the Digital Services Coordinator of establishment.
[15] Leerssen, P. (2020), The Soap Box as a Black Box: Regulating Transparency in Social Media Recommender Systems, European Journal of Law and Technology, 11 (2), p. 47.
[16] Do the ubiquitous cookie setting pop ups in Europe really improve online activities?
[17] Rieder, B., Matamoros-Fernández, A. & Coromina, Ó. (2018), From ranking algorithms to ‘ranking cultures’: Investigating the modulation of visibility in YouTube search results, Convergence, 24 (1), pp. 50-68; Leerssen, P. (2022), Algorithm Centrism in the DSA’s Regulation of Recommender Systems, VerfBlog, 2022/3/29, DOI: 10.17176/20220330-011148-0.
[18] Leerssen, P. (2020), The Soap Box as a Black Box: Regulating Transparency in Social Media Recommender Systems, European Journal of Law and Technology, 11 (2), p. 25.
[19] Edwards, L., Veale, M. (2017), Slave to the Algorithm? Why a ‘Right to Explanation’ is probably not the remedy you are looking for, Duke Law & Technology Review, 16 (1), pp. 18-84 (67); Ananny, M., Crawford, K. (2018), Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability, new media & society, 20 (3), pp. 973-989 (979).
[20] Schweitzer, H., et al. (2018), Report for the Federal Ministry for Economic Affairs and Energy (Germany), n. 220; see also Scott-Morton, F., et al. (2019), Report of the Committee for the Study of Digital Platforms – Market Structure and Antitrust Subcommittee, pp. 35-38.
[21] Leerssen, P. (2022), Algorithm Centrism in the DSA’s Regulation of Recommender Systems, VerfBlog, 2022/3/29, DOI: 10.17176/20220330-011148-0.
[22] Heldt, A., Kettemann, M. C. & Leerssen, P. (2020), The Sorrows of Scraping for Science: Why Platforms Struggle with Ensuring Data Access for Academics, VerfBlog, 2020/11/30, DOI: 10.17176/20201130-220222-0.
[23] Helberger, N., et al. (2021), Regulation of news recommenders in the Digital Services Act: empowering David against the Very Large Online Goliath, Internet Policy Review, accessible at: https://policyreview.info/articles/news/regulation-news-recommenders-digital-services-act-empowering-david-against-very-large.
[24] Helberger, N., et al. (2021), Regulation of news recommenders in the Digital Services Act: empowering David against the Very Large Online Goliath, Internet Policy Review, accessible at: https://policyreview.info/articles/news/regulation-news-recommenders-digital-services-act-empowering-david-against-very-large.
[25] See on the DMA and gatekeeping power: Budzinski, O., Mendelsohn, J. (2022), Regulating Big Tech: From Competition Policy to Sector Regulation? (Updated October 2022 with the Final DMA), http://dx.doi.org/10.2139/ssrn.4248116.
[26] See also Möller, J., et al. (2018), Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity, Information, Communication & Society, 21 (7), pp. 959-977; Helberger, N. (2019), On the Democratic Role of News Recommenders, Digital Journalism, 7 (8), pp. 993-1012.