The EU Digital Markets Act (“DMA”) contains several provisions which reflect important behavioral insights, and in particular the importance of choice architecture for end user decision-making. This article discusses three roles played by such insights. First, several DMA obligations address conduct whose anticompetitive effects arise from the interlinkage between choice architecture and end user behavior. Second, certain DMA obligations more explicitly cover the choice architecture facing users. Third, the heavy emphasis on effectiveness within the DMA creates a potential role for behavioral insights. If gatekeepers are to comply with the DMA by being effective in promoting fairness and contestability, then they may need to do more to address behavioral biases than the provisions state explicitly “on their face.” But does the DMA go far enough in considering the implications of behavioral economics? Arguably not. This article also describes some residual questions and challenges where more clarity could be given or more could usefully be done.

By Amelia Fletcher[1]

 

The way in which options are presented to people – the so-called “Choice Architecture” they face – can have a dramatic impact on their choices. This key behavioral insight is increasingly well understood and is having ramifications across many policy areas. The UK Competition and Markets Authority recently published a report discussing the implications of online choice architecture for consumer protection and competition policy (CMA, 2022),[2] and we can also see its impact in the new EU Digital Markets Act (“DMA”).

This heightened focus on choice architecture reflects the growing recognition that individuals exhibit behavioral biases, which in turn arise from their cognitive limitations. We are not super-calculating fact-based machines. Rather, we think about things subjectively, have limited attention, and utilize rules of thumb.

This toolkit serves us reasonably well most of the time. It can be perfectly sensible to accept default options, choose the highest-ranked or most prominent recommendations, or stick with the status quo. However, our tendency to do so can also lead us astray. Because our behavior tends to exhibit systematic biases, knowledge of these can be exploited by others. As a simple example, if a firm knows I have a strong predilection for accepting the default option, this can potentially be used to sell me products I don’t need, or to discourage me from searching even when I would benefit from doing so. The US Federal Trade Commission recently found that such so-called “dark patterns” are on the increase online (FTC, 2022).[3]

The EU Digital Markets Act imposes new rules on a small set of the largest “gatekeeper” platforms. Recognizing the limited attention of their end users, these platforms have worked hard to make the consumer journey as smooth as possible. This can be beneficial. The use of defaults, for example, can be helpful in reducing the number of active decisions end users have to make, and so ease the adoption of innovative new services. However, it can also be harmful. The EU’s 2018 Google Android decision[4] (recently upheld by the General Court)[5] found that the use of such defaults in the presence of end user “status quo bias” can enable leverage of market power from one service into another.

During the final stages of negotiations on the DMA, a variety of changes were made which more firmly embedded behavioral insights within the regulation. These changes are broadly positive. This short article discusses three key ways in which insights relating to choice architecture and behavioral biases underpin the final version of the DMA:

  1. Several DMA obligations seek to address conduct whose anticompetitive effects arise from the interlinkage between choice architecture and user behavioral biases.
  2. Certain DMA obligations more explicitly cover the choice architecture facing users.
  3. The heavy emphasis on effectiveness within the DMA also creates a potential role for behavioral insights. If gatekeepers are to comply with the DMA by being effective in promoting fairness and contestability, then they may need to do more to address behavioral biases than the provisions state explicitly “on their face.”

But does the DMA go far enough in considering the implications of behavioral economics? Arguably not. This article describes some residual questions and challenges where more clarity could be given or more could usefully be done.

 

I. DMA OBLIGATIONS TO ADDRESS ANTICOMPETITIVE EFFECTS ARISING FROM CHOICE ARCHITECTURE

Certain DMA requirements are designed to address conduct whose anticompetitive effects are underpinned by the interlinkage between choice architecture and behavioral biases.

For example, a key behavioral insight is that individuals can be highly – and unduly – influenced by ranking and salience. This was important in the 2017 Google Shopping decision,[6] which set out how Google was able to leverage its position in general search by demoting rival shopping sites down its search rankings (exploiting “ranking bias”) and making its own Shopping Box highly prominent (exploiting “saliency bias”). Likewise, the ongoing Amazon Buy Box case[7] has its (alleged) anticompetitive effect because consumers have a strong tendency to use the salient offer in Amazon’s Buy Box and are far less likely to scroll down or click through to find alternative offers.

Recognizing this vital importance of ranking for end user decision-making, Article 6(5) of the DMA requires that “the gatekeeper shall not treat more favourably, in ranking and related indexing and crawling, services and products offered by the gatekeeper itself than similar services or products of a third party. The gatekeeper shall apply transparent, fair and non-discriminatory conditions to such ranking and related indexing and crawling.” Recital (52) clarifies that this also covers “instances where a core platform service presents or communicates only one result to the end user.”

Similarly, the EU’s ongoing Apple App Store case[8] partly relates to Apple’s “anti-steering provisions,” which limit the ability of app developers to inform end users of alternative purchasing possibilities outside of apps. Such provisions restrict competition to the app store by creating both informational and behavioral barriers: they limit end users’ awareness of alternative purchasing possibilities and make those alternatives harder to access.

Again, this concern is addressed by the DMA. Article 5(4) prohibits such anti-steering provisions, while Article 5(5) ensures that content and subscriptions purchased outside of apps can be accessed and used smoothly within them.

 

II. DMA OBLIGATIONS THAT MORE EXPLICITLY COVER THE CHOICE ARCHITECTURE FACING END USERS

The role of behavioral insights within the DMA provisions described above is somewhat implicit. However, there are certain DMA obligations which more explicitly cover the choice architecture facing end users.

These essentially take two forms. First, and most prevalent, are a set of provisions that require the gatekeeper to enable end users to switch services. While these are primarily about reducing switching costs, additional wording was added to the DMA at a late stage that has a more behavioral bent. It is no longer simply required that switching is possible, but also that it is easy. For example:

  • Article 6(3) requires that the gatekeeper shall “allow and technically enable end users to easily change default settings” in relation to search engines, web browsers and virtual assistants, while Article 6(4) imposes a similar requirement in respect of third party software apps or app stores.
  • Article 6(3) requires that end users should be able “to easily uninstall” any apps.
  • Article 6(13) requires gatekeepers to ensure that their conditions for terminating the provision of a core platform service “can be exercised without undue difficulty.”
  • The wording in Article 6(6) takes a slightly different form, but arguably comes to the same thing. Gatekeepers are required not to “restrict technically or otherwise the ability of end users to switch between, and subscribe to, different software applications and services.” (All emphasis added).

While they may seem innocuous, the terms “easily,” “without undue difficulty” and “or otherwise” are important. We know that real end users are unlikely to act in the way that the regulation intends if it is in any way difficult to do so. There is also ample evidence that gatekeepers are well positioned to tweak the choice environment, sometimes subtly, to make such actions harder, rather than easier. This final terminology should help to prevent this.

The second set of obligations go further. They recognize that it may not be sufficient to enable end users to make choices, or even to make them easily. End users may exhibit such strong “status quo bias” that they still fail to act. And if they fail to act, then the interventions will not have their desired impact on fairness and contestability.

This issue is addressed by facilitating the use of prompts and requiring some use of choice screens, which force end users to make an active choice. Specifically:

  • Under Article 6(4), gatekeepers must allow third party providers of apps and app stores to prompt end users to decide if they wish to make that app or app store their default. Such prompts are expected to help to overcome “status quo bias” and really shift end user choices.
  • Under Article 6(3), gatekeepers must require end users to choose – from a list of the main available service providers – their online search engine, virtual assistant, or web browser, at the time of their initial use. Such a choice screen is designed to prevent gatekeepers from benefitting from “default bias” by setting their own services as defaults.

 

III. BEHAVIORAL INSIGHTS AND THE “EFFECTIVENESS” PROVISIONS OF THE DMA

A third potential linkage between behavioral economics and the DMA lies in the DMA’s heavy emphasis on “effectiveness.” Under the DMA, effectiveness does not simply relate to whether an obligation is formally achieved in itself. For an obligation to be met, it must also be effective in achieving the DMA’s overall objectives of fairness and contestability.

This is seen in the overarching compliance framework, as set out in Article 8.

  • Article 8(1) states that: “The gatekeeper shall ensure and be able to demonstrate compliance with the obligations laid down in Articles 5, 6 and 7 of this Regulation. The measures implemented by the gatekeeper to ensure compliance with those Articles shall be effective in achieving the objectives of this Regulation and of the relevant obligation.”
  • Article 8(2) enables the Commission to specify “the measures that the gatekeeper concerned is to implement in order to effectively comply with the obligations,” and Article 8(7) states that in doing so, “the Commission shall ensure that the measures are effective in achieving the objectives of this Regulation and the relevant obligation.”

It is noteworthy that in both of the passages quoted in the previous bullets, the wording “of this Regulation and” was added in the final wording of the Regulation, presumably to make absolutely clear that effectiveness was to be viewed in the context of the overall objectives of fairness and contestability.

The focus on effectiveness is also seen within individual obligations. Specifically:

  • Article 6(4) requires gatekeepers to “allow and technically enable the installation and effective use of” third party apps and app stores.
  • Article 6(7) requires gatekeepers to allow “effective interoperability.”
  • Article 6(9) requires gatekeepers to provide “effective portability of data,” including “tools to facilitate the effective exercise of such data portability.”
  • Article 6(10) requires data access for business users that is “effective, high-quality, continuous and real-time.”

This focus on effectiveness within the DMA does not make explicit reference to behavioral considerations. However, such considerations seem likely to be critical in practice.

Indeed, choice architecture is explicitly addressed in Article 13, which relates to anti-circumvention measures. Here, gatekeeper platforms are specifically prohibited from using behavioral techniques or interface design to undermine effective compliance. This includes a prohibition on making the exercise of end user choice unduly difficult by “offering choices in a non-neutral manner,” or by “subverting end users’ or business users’ decision making via the structure or design of a user interface.”

How might this emphasis on effectiveness play out in practice?

Consider, for example, the end user data portability requirement under Article 6(9). As is discussed in Recital (59), data portability will be “effective” in promoting contestability if it genuinely enables end user switching and/or multi-homing, and thereby incentivizes gatekeepers and business users to innovate.

This in turn requires that there are no barriers to end users making use of data portability. It seems reasonable to assume that the requirements around effectiveness will prevent gatekeepers from creating behavioral barriers to data portability, such as making end users click through excessive warning screens before porting their data. It should also prevent gatekeepers from putting in place rules that restrict third party services from encouraging or prompting the use of data portability.

But even if gatekeepers do nothing to inhibit take up, that may not be enough. Experience from multiple other markets tells us that enabling users to switch need not lead to them actually switching or multi-homing. In the face of inactive and cautious consumers, even more proactive stimulation may be needed. For example, despite the UK Current Account Switching Service (“CASS”) being successful in eliminating most of the difficulties that consumers faced in switching bank, consumers were insufficiently aware of this and switching rates remained stubbornly low. As a result, CASS has now been additionally required to engage in the active promotion of its services.

Looking forward, it will be interesting to see whether the Commission seeks to use the requirement of effectiveness to drive similar proactive interventions in an online context – interventions that may even go beyond what the DMA sets out “on its face.”

 

IV. DOES THE DMA GO FAR ENOUGH IN INCORPORATING BEHAVIORAL INSIGHTS?

While these various DMA provisions reflect a far better understanding of behavioral science than might have been expected from the DMA’s initial drafting, there nonetheless remain a number of residual questions and additional challenges.

First, the DMA’s emphasis on users being able to take certain actions “easily” or “without undue difficulty” is clearly helpful. If end users find it hard to take actions, then they will not do so. But how should these terms be interpreted in practice?

For example, it is required under Article 6(3) that end users should be able to change their default search engine easily. But there are typically multiple access points to search engines on a device. Users can go to a search app, they can go to a particular browser and use its default search engine, they can search via the voice assistant and use its default search engine, they can use text search (or “look-up”) from within another app, or they can use a search widget. Should it be presumed that being able to switch search engine “easily” means that end users should be able to switch the default setting for all of these at once? Or – arguably even better – that they should have access to a single screen where they can simply tick which access points they wish to switch? One possible design is sketched below.
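By way of illustration only, the following minimal sketch models such a unified settings screen: every search access point is listed with a tick-box, and a single operation applies the user’s chosen engine to exactly the ticked entries. All names here (the access points, the engine identifiers, the SearchDefaults class) are hypothetical and do not reflect any gatekeeper’s actual implementation.

```python
# Illustrative sketch of a unified "change default search engine" screen.
# All identifiers are hypothetical assumptions, not any real platform's API.
from dataclasses import dataclass, field

ACCESS_POINTS = ["search_app", "browser", "voice_assistant", "in_app_lookup", "widget"]

@dataclass
class SearchDefaults:
    # Maps each access point to its current default engine.
    engine: dict[str, str] = field(
        default_factory=lambda: {ap: "incumbent-engine" for ap in ACCESS_POINTS}
    )

    def switch(self, new_engine: str, ticked: list[str]) -> None:
        """Apply the chosen engine to exactly the access points the user ticked."""
        for ap in ticked:
            if ap not in self.engine:
                raise ValueError(f"unknown access point: {ap}")
            self.engine[ap] = new_engine

settings = SearchDefaults()
# The user ticks three of the five access points on the single screen.
settings.switch("rival-engine", ["search_app", "browser", "widget"])
print(settings.engine)
```

Under this kind of design, “easily” would be satisfied in one interaction rather than five separate settings journeys.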

It is also not clear that sufficient thinking has been done in relation to the different ways in which end users interact with voice assistants versus screens. For the former, users are less likely to be able to deal effectively with long lists of options. It is one thing to enable a user to say “Siri, I wish to change my default browser”; it is quite another to think about how the available options can then be presented in a neutral way. One conceivable approach is sketched below.
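As a hedged illustration of one possibility, the options could be read out in small randomized batches, letting the user pick a name or ask for more. The batch size, phrasing, and provider names below are all assumptions for the sketch, not anything the DMA itself prescribes.

```python
# Illustrative sketch: presenting choice-screen options by voice in small,
# randomized batches. Batch size, phrasing and names are assumptions only.
import random

def voice_prompts(options: list[str], user_seed: int, batch_size: int = 3):
    """Yield short spoken prompts covering all options in a random order."""
    rng = random.Random(user_seed)   # per-user seed keeps the order stable on replay
    order = options.copy()
    rng.shuffle(order)               # randomize so no provider is always read first
    for i in range(0, len(order), batch_size):
        batch = order[i:i + batch_size]
        yield "You can choose " + ", ".join(batch) + ". Say a name, or say 'more'."

for prompt in voice_prompts(["Browser A", "Browser B", "Browser C",
                             "Browser D", "Browser E"], user_seed=7):
    print(prompt)
```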

Also, what will firms be expected to do in order to demonstrate compliance with these provisions? This will presumably involve showing how easily users can switch defaults. But this raises the question of how to “audit” choice architecture.

There are established methods for testing the impact of choice architecture, such as A/B testing. A natural way of demonstrating compliance, therefore, would be for gatekeepers to share with the Commission evidence derived from such experimental work. But will this be enough? It may well be that the Commission will need to require additional targeted testing. There may also be merit in finding a way to systematize such testing and its reporting, so that all gatekeepers use a common framework.
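To make the A/B testing point concrete, here is a minimal sketch of the kind of analysis such evidence might rest on: comparing default-switching rates between two flow variants with a standard two-proportion z-test. The sample sizes and switch counts are invented purely for illustration.

```python
# Minimal sketch of A/B test evidence on choice architecture: do more users
# switch defaults under flow B than flow A? All figures are hypothetical.
from math import erf, sqrt

def two_proportion_ztest(x_a, n_a, x_b, n_b):
    """Two-sided z-test for a difference in switching rates between variants."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_pool = (x_a + x_b) / (n_a + n_b)                      # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value
    return p_a, p_b, z, p_value

# Variant A: current default-switching flow; Variant B: candidate redesign.
p_a, p_b, z, p = two_proportion_ztest(x_a=450, n_a=10_000, x_b=520, n_b=10_000)
print(f"switch rate A={p_a:.2%}, B={p_b:.2%}, z={z:.2f}, p={p:.4f}")
```

A common reporting framework across gatekeepers might standardize exactly these elements: the outcome metric, the sample sizes, and the statistical test applied.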

Second, it is not clear that the regulation takes a fully consistent or appropriate approach to the repeating of prompts. Under Article 5(2), which restricts the collection, combination, and cross-use of personal data across services without active end user consent, the DMA states specifically that where consent “has been refused or withdrawn by the end user, the gatekeeper shall not repeat its request for consent for the same purpose more than once within a period of one year.”

This wording seems to be partly motivated by concerns around “consent fatigue.” This seems sensible. However, there is no equivalent wording in Article 6(4) that would similarly limit the frequency of prompts from third parties, or allow gatekeepers to do so. As such, there is a serious risk that end users become overwhelmed by prompts from third parties seeking to become their default. This is in turn likely to generate “choice fatigue,” creating a risk that end users either ignore the prompts, thus dampening their potential impact on contestability, or (even more worryingly) actually make mistakes.
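As an illustration of how simple the missing safeguard could be, the sketch below enforces an Article 5(2)-style cap – no repeat request for the same purpose within a year – at the level of a (user, purpose) pair. The class and method names are hypothetical; nothing in the DMA prescribes this mechanism.

```python
# Illustrative sketch of a prompt frequency cap in the spirit of Article 5(2):
# after a refusal, no repeat request for the same purpose within one year.
from datetime import datetime, timedelta

COOL_OFF = timedelta(days=365)  # "not ... more than once within a period of one year"

class PromptLimiter:
    def __init__(self):
        self._last_prompt: dict[tuple[str, str], datetime] = {}

    def may_prompt(self, user_id: str, purpose: str, now: datetime) -> bool:
        """True if this user has not been prompted for this purpose in the last year."""
        last = self._last_prompt.get((user_id, purpose))
        return last is None or now - last >= COOL_OFF

    def record_refusal(self, user_id: str, purpose: str, now: datetime) -> None:
        self._last_prompt[(user_id, purpose)] = now

limiter = PromptLimiter()
t0 = datetime(2024, 1, 1)
print(limiter.may_prompt("u1", "default-browser", t0))                          # True
limiter.record_refusal("u1", "default-browser", t0)
print(limiter.may_prompt("u1", "default-browser", t0 + timedelta(days=30)))     # False
print(limiter.may_prompt("u1", "default-browser", t0 + timedelta(days=366)))    # True
```

Whether gatekeepers would even be permitted to apply such a cap to third party prompts under Article 6(4) is precisely the open question raised above.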

There must also be a risk of such prompts being misleading. In telecoms markets, when it was made too easy for third parties to switch consumers to their own services, we saw the emergence of “slamming” whereby consumers would find they had switched provider without fully realizing it.  This would not be a good outcome here, but the risk is not addressed by the DMA, and nor is it clear that the DMA would allow gatekeepers to step in and ameliorate it.  

Third, there can be important tensions in designing choice architecture, and it is not clear that these have been considered fully. For example, in relation to the right to termination (as addressed under Article 6(13)), the associated Recital (63) proposes that “closing an account or un-subscribing should not be made be (sic) more complicated than opening an account or subscribing to the same service.”

Whilst this would seem a desirable objective in principle, it may be difficult to achieve in all cases without creating unintended consequences. For example, when end users are setting up a new device, they value being led through the process of signing up to a series of services in a well-ordered and straightforward fashion. It is not clear how it would be possible to make it as easy to unsubscribe from these services as to sign up to them without giving the end user regular prompts to consider doing so. But this could easily annoy end users and could even lead to them making mistakes, as discussed previously. In practice, it is to be hoped that the Commission would accept a proportionate solution, such as the introduction of easy-to-find cancellation buttons. But this could usefully be clarified.

Likewise, the requirement under Article 5(2) not to repeat consent requests more than annually might seem sensible, but what if a user has switched off location services and then wishes to use a proprietary mapping app? Is the gatekeeper really prohibited from advising the user that they will need to switch on location services to do so?

Fourth, while the DMA is designed to open up end user choice, we would expect end users to have a tendency to choose brand names they already know, and to be risk averse in terms of trying out new options. This has two important implications.

First, it means that the design of the default choice screens required under Article 6(3) really matters. The precise choice architecture adopted will be critical to their success. There are many different aspects that could become relevant here, from the number of options provided and their ordering, to whether there should be brief descriptions of each option. These options will need to be tested to ensure that the choice screens have their desired impact. Another element that is almost certain to be helpful would be clear reassurance that users can easily reverse their choice later if they wish to do so. One commonly cited design element – randomizing the order in which options appear – is sketched below.
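As a minimal sketch of that single element, the snippet below draws a per-user random ordering of the options, so that no provider systematically benefits from ranking bias while each user still sees a stable screen if it is re-displayed. The provider names and seeding scheme are illustrative assumptions only.

```python
# Illustrative sketch: per-user randomized ordering of choice-screen options,
# so no provider systematically occupies the salient top slot.
import random

PROVIDERS = ["Engine A", "Engine B", "Engine C", "Engine D", "Engine E"]

def build_choice_screen(user_seed: int, providers: list[str]) -> list[str]:
    """Return a random ordering that is stable across re-displays for one user."""
    rng = random.Random(user_seed)   # seeded per user, so the order does not flicker
    order = providers.copy()
    rng.shuffle(order)
    return order

print(build_choice_screen(user_seed=42, providers=PROVIDERS))
```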

Second, it means that opening up choice for some services could backfire, by enhancing the market position of the biggest players yet further. Take Microsoft Bing, for instance. Currently this is the default search engine on Microsoft devices. Despite Bing only having around 5 percent of the EU search market, it seems plausible that Microsoft will be required to offer users an upfront choice of search engine. This in turn may well lead to a further loss of Bing’s market share to Google; presumably not the result the Commission was seeking.

 

V. CONCLUSION

Overall, the DMA already exhibits a strong understanding of the importance of choice architecture and end user behavioral biases. However, it is not clear that the DMA has in fact gone far enough in considering the implications of behavioral science. Some challenges and unanswered questions remain. How “easy” do actions need to be to satisfy the obligations, and how will compliance be demonstrated? Is there a risk of end users being overwhelmed or misled by third party prompts, and how can this be addressed? Are there unintended effects of some of the proposals around choice architecture? And is there a risk that greater end user choice could in fact embed market positions even more strongly?

It is to be hoped that many of these questions will be considered and addressed by the Commission during the process of DMA implementation. This is not a simple matter and would require serious resources and expertise. If it can be done, the DMA stands to be the most advanced regulation to date in terms of its embedding of behavioral insights. But if not, its effectiveness may be seriously compromised.


[1] Amelia Fletcher is Professor of Competition Policy at the University of East Anglia and a Non-Executive Director at the UK Competition and Markets Authority. This paper is written in her academic capacity and does not necessarily represent the views of the CMA. Amelia is grateful for useful discussions with Marc Bourreau, Jacques Crémer, Alexandre de Streel, Richard Feasey, Paul Heidhues, Jan Krämer, Giorgio Monti, Martin Peitz and Vanessa Turner, as well as at the Centre on Regulation in Europe (“CERRE”), Ofcom and Oxera.

[2] https://www.gov.uk/government/publications/online-choice-architecture-how-digital-design-can-harm-competition-and-consumers.

[3] https://www.ftc.gov/news-events/news/press-releases/2022/09/ftc-report-shows-rise-sophisticated-dark-patterns-designed-trick-trap-consumers?utm_source=govdelivery.

[4] https://ec.europa.eu/competition/elojade/isef/case_details.cfm?proc_code=1_40099.

[5] https://curia.europa.eu/juris/document/document.jsf;jsessionid=06234DFA904539A9DE7D8C3B327A585E?text=&docid=265421&pageIndex=0&doclang=en&mode=lst&dir=&occ=first&part=1&cid=347.

[6] https://ec.europa.eu/competition/antitrust/cases/dec_docs/39740/39740_14996_3.pdf.

[7] https://ec.europa.eu/commission/presscorner/detail/en/statement_20_2082.

[8] https://ec.europa.eu/commission/presscorner/detail/en/ip_21_2061.