Online choice architecture (“OCA”) encompasses the set of design features that impact choice in digital environments. From default settings and notifications to personalization and recommender systems, OCA features are present in almost every interaction with technology. Existing evidence on the effects of OCA on human behavior has often been one-sided, focusing either on positive or on negative outcomes. In online settings, the effect of OCA practices on consumer welfare is often complicated. In this paper, we describe the design process and practices of OCA, analyze applications of OCA for good and for bad, and discuss future directions for research and practice of OCA design. We recommend that designers and researchers measure and capture a wider range of outcomes, beyond user engagement and satisfaction. We also highlight the interplay between data, algorithms, and OCA design, since many OCA practices are embedded in the design of interfaces and are often data-driven. Therefore, advancing good and preventing bad OCA design might require an approach that goes beyond the individual user or designer and looks at structural changes across the market.

By Michael Sobolev & Vedran Lesic[1]

 

I. INTRODUCTION

Digital technology is constantly transforming how people make decisions in their daily lives. The arrival of the iPhone had one of the biggest effects on daily human behavior. As of now, 85 percent of U.S. adults own a smartphone[2] and increasingly rely on these devices for communication, entertainment, and shopping. E-commerce is another transformative area, as people increasingly rely on websites and mobile apps for searching for and buying almost any type of product or service. As evidence, nearly half of the U.S. population currently pays for Amazon’s premium subscription (i.e. Amazon Prime).[3] Given such high demand, it would be reasonable to expect that these technologies improve people’s lives and make their choices easier, better, and more informed than ever before.

Choices do not happen in a vacuum. Almost every decision online is influenced by the design of the user interface (“UI”) and user experience (“UX”). Big tech companies (e.g. Google, Amazon) employ large teams of UX designers and researchers to test and optimize these types of features for user engagement. Even seemingly small changes in design can matter. At Microsoft, A/B testing of news headlines revealed that slightly changing a shade of color can lead to a revenue increase of $10 million annually.[4] Other design decisions can be more deliberate. Online shopping websites simplify buying processes to ensure the quickest possible conversion for each consumer, with Amazon’s one-click checkout as one of the best examples. Ranking, reviews, and recommendations are only a partial list of interface designs aimed at influencing choices in predictable ways. In behavioral economics, we call these sets of design features choice architecture.[5]

Choice architecture broadly refers to the way choices are set up and the context in which people make decisions.[6] Common tools of choice architecture[7] include setting a default option, ranking products, and framing information.[8] For example, in a traditional ‘brick and mortar’ environment, the way options are arranged, what is displayed more prominently, and how consumers interact with other shoppers and staff will affect what they might purchase that day. In an online context, choice architecture is the environment in which users or consumers make decisions, including the display and arrangement of choices and the design of interfaces. We call this set of digital design features Online Choice Architecture (“OCA”). OCA is a neutral term; how the choice architect (also called the designer)[9] applies it determines the direction of its impact.

In this paper, we describe OCA design, analyze applications of OCA for good and for bad, and discuss future directions for research and practice of OCA. We build on multidisciplinary research ranging from behavioral economics to human-computer interaction. Since the publication of the book Nudge,[10] academic researchers and behavioral science practitioners have generated a large number of studies, randomized controlled trials, and papers on the positive effects of offline and online choice architecture and nudges. In online settings, research on the effectiveness, prevalence, and negative effects of OCA (and the closely linked dark patterns)[11] is attracting growing attention, especially among consumer organizations, competition authorities, and the media. Our aim is to further advance the discussion on the complicated effects of OCA on human behavior and to discuss implications for design.

 

II. ONLINE CHOICE ARCHITECTURE

Choice architecture is everywhere, and it is unavoidable. Elements of choice architecture are integrated into every product or service people use as part of daily life. E-commerce websites include product reference pricing, online reviews, and ranking of products, among other OCA practices. Mobile devices and apps send people what seems like an unlimited number of notifications daily. Social media includes automatically generated feeds of posts and news, setting almost no limit on the amount of content people can consume. OCA practices evolved from an understanding of human behavior and are designed to influence behavior in a predictable way, and as such can be used equally for good or bad. What is the impact of OCA practices on consumer and user welfare? The answer is often complicated. In Table 1, we provide a few examples of good and bad applications of OCA, which we discuss later in the paper.

Table 1: Examples of Good and Bad Online Choice Architecture

| Choice Architecture | Good example | Bad example |
|---|---|---|
| Defaults | Setting preferred information to reduce friction in online shopping (i.e. one-click) | Default option for the least amount of privacy |
| Prompts and Reminders | Reminders to pay bills on time and digital calendars | Nagging push notifications to increase engagement with a product |
| Ranking and Recommendations | Ranking products based on explicit user preferences | Paid or promoted ranking that ignores product quality and user preferences |

Note: This table does not provide an exhaustive set of OCA categories. The examples are illustrative only, and each application of OCA has to be analyzed in detail for its positive and negative effects on behavior.

In the digital environment, the products and services people buy or download are full of pre-specified defaults. For example, Apple’s iPhone devices and Microsoft’s Windows operating system come with a set of pre-installed apps, often developed by the same entity. One example of a digital default that impacted global markets[12] is the case of the Google search engine. Due to the pre-selected default, most users may not even be aware of the option to change search engines, thereby limiting their autonomy. However, when prompted with an active choice screen, an overwhelming majority of users still stick with Google as their search engine.[13] Depending on the perspective, the overall effect on consumer and user welfare (and market competition) could be either positive or negative. As a result, this case was the subject of investigation by different competition and consumer authorities around the world.[14] In other cases, strong preferences might be able to override defaults. For example, despite the efforts of Microsoft to establish its default internet browser on all Windows devices,[15] the majority of users deviate from the default and actively download and use an alternative browser, as evidenced by the market share of the Chrome browser.[16] A similar case involves the conscious effort by iPhone users to download and use the Google Maps app as opposed to the default Apple Maps app that is pre-installed on all Apple devices. In online settings, defaults can be challenging to design: it is often hard to find a default option that works for everyone, and forcing users to actively choose between two or more options can lead to undesired friction.

Another interesting example of OCA is Amazon’s invention of the one-click ordering button. This feature allows users to set default shipping and payment information that can be used in every future purchase. Amazon patented this idea in 1999, and the recent expiration of the patent allowed other payment platforms to adopt similar technology.[17] The wide adoption among shopping platforms suggests a benefit for businesses and also demand from their users for this feature. For users, one-click shopping significantly reduces friction, which, in turn, leads to better conversion rates for businesses. Yet, in the online setting, when users make fast decisions, reduction in friction can also lead users to consume more and buy products they do not really need.[18] In those cases, adding friction as part of the OCA might actually help users pause and reflect on their decisions and reduce the negative side effects of a seamless online shopping experience. This might be especially true in online banking, where introducing friction by increasing the number of decision points before a financial transaction has proven beneficial for consumers.[19]

The examples above emphasize the susceptibility of users to OCA practices. With the adoption of digital technology, the online setting brings a new set of features that create opportunities for designers of choice architecture. Good OCA design can provide substantial benefits for users, including a more seamless user experience, easier comparisons between products, and greater transparency. To further tailor products and services, designers leverage user preferences and behavior to personalize every step of the user experience. In some digital environments, users also have the ability to customize the product or service they use for maximum utility. Unfortunately, the move to digital environments also opened the door for bad OCA design and negative effects on human behavior. Users online tend to have shorter attention spans and to trust information provided by others (e.g. online reviews).[20] Bad OCA design, for example setting a problematic privacy default, may pose a substantial risk to consumer and user welfare.

What is the process of OCA design? As a first step, designers can leverage existing frameworks that build on research in behavioral science to design an effective choice architecture (for example, MINDSPACE[21] or EAST).[22] The EAST framework, for example, urges designers to apply behavioral insights by making behavior easy, attractive, social, and timely. The second step involves optimization, often through iterative design based on user feedback and data-driven A/B testing. A/B testing compares the behavior of real users accessing different versions of a website or an app to identify the most effective version.[23] A/B testing has become increasingly popular across platforms and websites, with some conducting more than a thousand A/B tests every single day. For example, the New York Times A/B tests which headline creates the most engagement, and Netflix uses the same approach to personalize the thumbnails of shows for each individual user.[24] In fact, optimization of OCA usually never stops, with A/B tests playing a crucial part in the continuous evaluation of digital products and services.
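
To make the mechanics concrete, the sketch below shows the core statistical comparison behind a simple A/B test: two variants of a page, their conversion counts, and a two-proportion z-test. All traffic numbers are hypothetical, and real experimentation platforms add far more machinery (randomized assignment, sequential monitoring, guardrail metrics); this is a minimal illustration only.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of two page variants (A/B test).

    Returns both conversion rates, the z statistic, and a two-sided
    p-value under the null hypothesis that the variants perform equally.
    """
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    # Pooled rate and standard error under the null of no difference.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return rate_a, rate_b, z, p_value

# Hypothetical traffic: variant B changes only the shade of a headline.
rate_a, rate_b, z, p = two_proportion_z_test(120, 10_000, 152, 10_000)
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z = {z:.2f}  p = {p:.3f}")
```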

 

III. OCA FOR GOOD

As a general assumption, technology is invented to solve problems and improve human life. Some technologies directly target choice behavior, with the invention and widespread adoption of GPS as the best example to date. Most individuals, if they even remember life without it, would agree that GPS made life much easier and better by reducing the cognitive load of navigation while driving. Choice engines like Expedia, which allowed consumers to quickly search and book flights and hotels, are another example of a digital environment that transformed markets.[25] Modern digital platforms were originally designed with a similar purpose. Google was designed to streamline access to information, Amazon was designed to expand access and alternatives for shopping, and UBER was designed to reduce friction in transportation. Again, as in the case of GPS, most people would agree that access to Google, Amazon, Expedia, and UBER made their lives better. This promise of such technologies for improving choice behavior was one of the main premises of the book Nudge[26] and the foundation for the idea of choice architecture.

The digital environment expands the set of features that designers can easily control as part of OCA. As discussed earlier, UI/UX designers play a major role in creating OCA as part of interface design. Good principles of UI/UX design are often, but not always, analogous to good OCA design. For example, good principles of interaction design[27] recommend: (1) presenting feedback to the user as quickly as possible (prompts and reminders), (2) showing a clear way to exit the current interaction (e.g. a cancel button), (3) reducing user mistakes by providing helpful constraints and good defaults, (4) prioritizing the content and features that support primary goals (simplification and reducing friction), and (5) allowing users to make selections about how they want the product to work. These principles of UI were formulated to allow users to get the most utility from products, even when they are thinking and acting fast online.[28] As discussed earlier, there can be a tension between seamless, frictionless UI design and consumer welfare, as in the case of one-click shopping and mobile banking.

One of the most common practices of good OCA is personalization – tailoring a service or a product to a specific individual. By using data shared by users, personalization aims to tailor each step of an interaction with a product or a service, often increasing user engagement and satisfaction. It is not surprising that personalization techniques are nowadays implemented in nearly every digital product and across almost every business sector. A more intelligent form of personalization involves recommender systems, an example of technology designed to simplify choice by learning from user preferences and past behavior. At the simplest level, recommender systems allow personalized ranking of options, thereby reducing search costs and choice overload and helping people easily choose which movie to watch, what news article to read, and what song to listen to. These types of technologies were cited as a major contributor to the success of companies like Amazon, Spotify, and Netflix.[29]
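
A minimal sketch of the idea behind such personalized ranking is user-based collaborative filtering: rank the items a user has not yet seen by the ratings of similar users. The users, items, and ratings below are entirely hypothetical; production systems at the companies named above learn from millions of implicit signals rather than a small explicit ratings table.

```python
import math

# Hypothetical user-item ratings (1-5); purely illustrative data.
ratings = {
    "ana":   {"drama": 5, "comedy": 1, "sci_fi": 4},
    "ben":   {"drama": 4, "comedy": 2, "sci_fi": 5, "horror": 2},
    "carol": {"drama": 1, "comedy": 5, "horror": 4},
}

def cosine(u, v):
    """Cosine similarity between two users over their co-rated items."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(user, k=3):
    """Rank unseen items by similarity-weighted ratings of other users."""
    scores, weights = {}, {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their_ratings)
        for item, rating in their_ratings.items():
            if item not in ratings[user] and sim > 0:
                scores[item] = scores.get(item, 0.0) + sim * rating
                weights[item] = weights.get(item, 0.0) + sim
    # Predicted rating = similarity-weighted average of neighbors' ratings.
    return sorted(scores, key=lambda i: scores[i] / weights[i], reverse=True)[:k]

print(recommend("ana"))  # -> ['horror'], the only item ana has not rated
```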

An additional trend, powered by digital technologies, is users’ ability and motivation to use OCA features to support behavior in their daily lives. In behavioral science, these types of actions are called self-nudging.[30] One of the most classic and common tools of the “self-nudger” is setting reminders to deal with inertia, procrastination, and forgetfulness. Examples include reminders to pay bills on time, take medications daily, etc. Digital calendars and reminder apps further facilitate self-nudging by allowing users to set even more timely reminders.[31] Self-nudging can also help people overcome the addictive design of smartphones and social media, for example by setting limits on app usage or manipulating the interface to be less attractive.[32] Unfortunately, as in the case of social media, efforts by users to deal with highly engaging OCA design might not be sufficient to prevent negative effects.

 

IV. OCA FOR BAD

Digital environments may amplify the potential benefits of choice architecture for users but can also amplify the potential harms. The UK Competition and Markets Authority’s (“CMA”) recent publication on OCA is the most comprehensive guide to potentially harmful OCA practices.[33] According to the report, bad OCA design can directly harm consumers by distorting their choices.[34] Consumers might overspend, choose an inferior option, or feel pressured to buy unwanted products. These suboptimal choices can be attributed to bad OCA design such as default options that offer the least amount of privacy for users, excessive use of prompts and reminders (e.g. nagging push notifications), and unjustified friction (also called sludge)[35] that makes the cancellation of a service harder to initiate and complete. The example of default privacy settings for mobile apps and social media gained particular attention in the media, inspired extensive academic research,[36] and led to the adoption of new policies and regulations. As discussed earlier, finding a default that works for everyone, or personalizing the selection of default options for each individual, is hard.[37] As the privacy default example shows, setting a default that harms most users while benefiting businesses can be much easier.

Recent research in human-computer interaction has measured the prevalence of bad OCA practices and dark patterns using a variety of methods. The presentation of information can be easily manipulated (or framed) to nudge a specific choice. Using automatic text analysis, a study of more than 11,000 popular shopping websites detected dark patterns in more than 11 percent of those sites.[38] The three most common practices of bad OCA design on those sites were presenting information about scarcity (e.g. “limited quantities are available”), urgency (e.g. “discount will expire soon”), and social proof (e.g. “many people already purchased this item”). Mobile apps are also often designed with bad OCA practices. A study that examined Google Play Store apps during the first 10 minutes of usage discovered that 95 percent of them contain one or more dark patterns.[39] These mobile apps used the design of ranking, defaults, and prompts to influence choice, potentially against the intent of users. A comparison of three modalities (mobile app, mobile browser, and web browser) found that while services employ some dark patterns equally across modalities, many dark patterns vary between platforms.[40] This work highlights the scale of the problem and where to look for bad OCA design.
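
To give a flavor of such automatic text analysis, the sketch below flags pages whose copy matches keyword patterns for the three categories named above. The patterns and the sample text are hypothetical; the actual research pipelines (e.g. Mathur et al., 2019)[38] crawl real product pages and combine clustering with manual review rather than relying on a fixed keyword list.

```python
import re

# Hypothetical keyword patterns for three common dark-pattern categories.
PATTERNS = {
    "scarcity": re.compile(
        r"only \d+ (left|remaining)|limited (stock|quantities)", re.I),
    "urgency": re.compile(
        r"hurry|expires? soon|ends (today|tonight)|act now", re.I),
    "social_proof": re.compile(
        r"(\d+|many) (people|others|customers) (bought|purchased|are viewing)", re.I),
}

def detect_dark_patterns(page_text: str) -> list[str]:
    """Return the categories whose patterns appear in the page text."""
    return [name for name, rx in PATTERNS.items() if rx.search(page_text)]

text = "Hurry! Only 3 left in stock. 21 people bought this in the last hour."
print(detect_dark_patterns(text))  # ['scarcity', 'urgency', 'social_proof']
```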

Detecting bad OCA practices is a complicated task. Many bad OCA practices are bundled together or presented differently to each user. Furthermore, some users may be more susceptible to OCA practices and more vulnerable to harm due to personal characteristics (such as age, health, or wealth) or specific contexts (such as being under time pressure or in great distress due to major life events). Even good OCA design that works for the majority of users most likely will not address the issues of the most vulnerable users. Similarly, we expect that bad OCA design will harm the most vulnerable users even more.

 

V. DISCUSSION AND FUTURE DIRECTIONS

The positive and negative effects of OCA on human behavior are often complicated and easily missed. As we unpack the unavoidable impact of OCA in the digital environment, there is growing awareness of the prevalence of both the positive and the negative aspects of OCA. Even good design can have unexpected side effects, and if these side effects are not measured, they will be overlooked. Consider the example of user engagement. If designers consider only one outcome in the process of A/B testing, such as conversion rates, they might ignore the number of people leaving the page and inadvertently create dark patterns.[41] For some elements of choice architecture, such as recommender systems, it can be difficult to untangle the positive from the negative due to the tradeoff between “good” personalization of content and “bad” engagement (e.g. Facebook’s newsfeed). A more comprehensive measurement of user outcomes is needed to understand the effects of OCA practices and inform future design.
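
As a minimal illustration of that recommendation, the sketch below evaluates an experimental variant on several outcomes at once rather than conversion alone. The metric names and counts are hypothetical assumptions; real experimentation pipelines would define and log such guardrail metrics at much larger scale.

```python
from dataclasses import dataclass

@dataclass
class VariantOutcomes:
    """Hypothetical counters logged for one variant in an experiment."""
    visitors: int
    conversions: int
    exits: int        # left the page without interacting
    complaints: int   # e.g. refund requests traced back to the flow

    def report(self) -> dict[str, float]:
        # Judging a variant on several outcomes, not conversion alone,
        # surfaces side effects that a single metric would hide.
        return {
            "conversion_rate": self.conversions / self.visitors,
            "exit_rate": self.exits / self.visitors,
            "complaint_rate": self.complaints / self.visitors,
        }

# A variant can "win" on conversion while losing on exits and complaints.
variant_b = VariantOutcomes(visitors=10_000, conversions=152,
                            exits=4_300, complaints=35)
print(variant_b.report())
```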

Data drives the design of OCA. As discussed in this paper, the process of optimizing OCA involves collecting data on user preferences and behavior. A/B testing would not be possible without the ability to measure user behavior directly with digital data. The move to digital choice environments enabled the collection of massive amounts of digital traces of human behavior; Amazon has orders of magnitude more data points on each consumer than a typical brick and mortar store. For good or bad, this shift enables more personalized, adaptive, and autonomous design of choice architecture[42] and digital nudging.[43] The use of data in choice architecture design can also create direct conflicts for users, such as the tradeoff between users sharing personal data and platforms personalizing recommendations and advertising. The interplay between data, algorithms, and OCA design will play a crucial role in future research and policy discussions.

Everyone can be a choice architect. Whether the design of OCA is intentional or unintentional, it will still have an impact on human decision-making. Choice architects are not only behavioral scientists and UX/UI designers; they cover a wide range of professionals who participate in the design process. Anyone who actively thinks about users, marketing, product, or prices participates in and influences the eventual design of OCA. In fact, just by setting goals and targets, senior executives and managers also play a crucial role in the process. Because OCA practices often sit on a spectrum (e.g. the amount of friction), addressing OCA in a meaningful way requires unraveling which direction on a given spectrum the choice architect needs to move. For instance, in a shopping context, should websites add more friction and make users reflect before they buy an item, or remove friction and help users make quicker decisions at the risk of buyers’ remorse? This would mean having a bigger-picture discussion among all the different types of choice architects within businesses and other stakeholders to ensure that benefits are fully realized and harms are prevented.

User awareness cannot solve the problems of OCA. OCA is often well-embedded and subtle in the digital design of user interfaces, which means that users might not be aware that they are being nudged at all.[44] Even if users were informed that they are being nudged, the effectiveness of OCA practices may not be diminished. For example, a verbal disclosure about the presence of a nudge (e.g. defaults and framing) did not impact users’ decision-making but rather made them believe that others were more influenced by the OCA than themselves (i.e. overconfidence in their own judgment).[45] Furthermore, there is evidence that proactive transparency may actually increase the effectiveness of OCA by decreasing users’ perception of being deceived.[46] The challenge is that users might not be best equipped to protect themselves from harmful OCA, thereby requiring a different approach to remedies.

OCA design can have market implications. Going forward, OCA might not only bring good and bad to users; it might also impact businesses and their competition, as well as digital markets overall. For example, the CMA’s report[47] outlined that harmful OCA practices can weaken or distort competitive pressures. Businesses might start competing on less beneficial features of the product, such as salience, instead of actual quality and price. This might lead to less investment in innovation, leaving users worse off in the long term. Businesses may also use OCA to maintain, leverage, and exploit market power by making it harder to leave their digital ecosystems and nudging consumers to use their own products. Therefore, devising policy and remedies for harmful OCA design might require an approach that goes beyond OCA at the user level and looks at structural changes across the market.


[1] Cornell Tech, NY, U.S. and Competition and Markets Authority, UK, respectively. These ideas were formulated while the first author was teaching the Behavioral Economics for Tech (BEtech) course at Cornell Tech and during his time as a visiting fellow at the Digital Life Initiative (DLI). The opinions expressed in this paper are those of the authors and do not reflect the opinions of the CMA.

[2] Pew Research Center (2022) Mobile fact sheet, Report, Pew Research Center, Washington, DC.

[3] Amazon’s 2020 Letter to Shareholders: https://www.aboutamazon.com/news/company-news/2020-letter-to-shareholders.

[4] Kohavi, R. & Thomke, S. (2017). The surprising power of online experiments. Harvard business review, 95(5), 74-82.

[5] The term choice architecture was coined in the book Nudge. For more details, see: Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

[6] Johnson, E. J. (2022). The Elements of Choice: Why the Way We Decide Matters. Simon and Schuster.

[7] Johnson, E. J., Shu, S. B., Dellaert, B. G., Fox, C., Goldstein, D. G., Häubl, G. & Weber, E. U. (2012). Beyond nudges: Tools of a choice architecture. Marketing letters, 23(2), 487-504.

[8] Szaszi, B., Palinkas, A., Palfi, B., Szollosi, A. & Aczel, B. (2018). A Systematic Scoping Review of the Choice Architecture Movement: Toward Understanding When and Why Nudges Work. Journal of Behavioral Decision Making, 31(3), 355–366. https://doi.org/10.1002/bdm.2035.

[9] In the remainder of the paper we use “choice architect” and “designer” mostly interchangeability, unless noted otherwise.

[10] Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

[11] This paper uses a broad definition of OCA to enable discussion of a wider set of practices across a range of contexts, including dark patterns, dark nudges, and sludge. Dark patterns often come as a combination of multiple OCA practices (e.g. confirmshaming is a combination of framing, defaults, and visual manipulation).

[12] CMA. (2020). Online Platforms and Digital Advertising Market Study. Appendix H: Default Positions in Search. Retrieved from: https://assets.publishing.service.gov.uk/media/5fe4956ad3bf7f089e48deca/Appendix_H_-_search_defaults_v.6_WEB.pdf.

[13] Decarolis, F., Li, M., & Paternollo, F. (2022). Competition and Defaults in Online Search. Working Paper.

[14] CMA. (2020). Online Platforms and Digital Advertising Market Study. Appendix X: assessment of pro-competition interventions to enable consumer choice over personalised advertising. Retrieved from: https://assets.publishing.service.gov.uk/media/5fe36a658fa8f56af0ac66f2/Appendix_X__-__assessment_of_pro-competition_interventions_to_enable_consumer_choice_over_personalised_advertising_1.7.20.pdf.

European Commission (EC). (2018). CASE AT.40099 Google Android. Retrieved from: https://ec.europa.eu/competition/antitrust/cases/dec_docs/40099/40099_9993_3.pdf.

[15] United States v. Microsoft Corp., 253 F.3d 34 (D.C. Cir. 2001).

[16] Statista: Global market share held by leading desktop internet browsers from January 2015 to August 2022 https://www.statista.com/statistics/544400/market-share-of-internet-browsers-desktop/.

[17] Wells, J. R., Danskin, G. & Ellsworth, G. (2018). Amazon.com, 2018. Harvard Business School Case Study, (716-402).

[18] Paay, J. & Rogers, Y. (2019). The Dark Side of Interaction Design. Proceedings of the 31st Australian Conference on Human-Computer-Interaction, 2–2. https://doi.org/10.1145/3369457.3369547.

[19] Pausing, reading, and reflecting: decision points in high-risk investment consumer journeys https://www.fca.org.uk/publication/research/decision-points-consumer-journeys.pdf.

[20] Benartzi, S. & Lehrer, J. (2015). The Smarter Screen: What Your Business Can Learn from the Way Consumers Think Online. Hachette UK.

[21] Dolan, P., Hallsworth, M., Halpern, D., King, D., Metcalfe, R. & Vlaev, I. (2012). Influencing behaviour: The mindspace way. Journal of economic psychology, 33(1), 264-277.

[22] EAST: Four Simple Ways to Apply Behavioural Insights: https://www.bi.team/publications/east-four-simple-ways-to-apply-behavioural-insights/.

[23] Kohavi, R. & Thomke, S. (2017). The surprising power of online experiments. Harvard business review, 95(5), 74-82.

[24] Gomez-Uribe, C. A., & Hunt, N. (2015). The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS), 6(4), 1-19.

[25] Thaler, R. H., & Tucker, W. (2013). Smarter information, smarter consumers. Harvard Business Review, 91(1), 44-54.

[26] Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

[27] For more details check Nielsen’s 10 Usability Heuristics for User Interface Design: https://www.nngroup.com/articles/ten-usability-heuristics/.

[28] Kahneman, D. (2011). Thinking, fast and slow. Macmillan.

[29] Gomez-Uribe, C. A. & Hunt, N. (2015). The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS), 6(4), 1-19.

[30] Reijula, S. & Hertwig, R. (2022). Self-nudging and the citizen choice architect. Behavioural Public Policy, 6(1), 119-149.

[31] Sobolev, M. (2021). Digital nudging: using technology to nudge for good. Available at SSRN 3889831. http://dx.doi.org/10.2139/ssrn.3889831.

[32] Zimmermann, L. & Sobolev, M. Digital Nudges for Screen Time Reduction: A Randomized Control Trial with Performance and Wellbeing Outcomes. (2020) https://doi.org/10.31234/osf.io/nmgdz.

[33] CMA (2022). Evidence Review of Online Choice Architecture and Consumer and Competition Harm. Retrieved from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1069423/OCA_Evidence_Review_Paper_14.4.22.pdf.

[34] CMA (2022) Online Choice Architecture – How digital design can harm competition and consumers. Retrieved from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1066524/Online_choice_architecture_discussion_paper.pdf.

[35] Thaler, R. H. (2018). Nudge, not sludge. Science, 361(6401), 431-431.

[36] Acquisti, A., Brandimarte, L. & Loewenstein, G. (2015). Privacy and human behavior in the age of information. Science, 347(6221), 509-514.

[37] Mills, S. (2022). Personalized nudging. Behavioural Public Policy, 6(1), 150-159.

[38] Mathur, A., Acar, G., Friedman, M. J., Lucherini, E., Mayer, J., Chetty, M. & Narayanan, A. (2019). Dark Patterns at Scale: Findings from a Crawl of 11k Shopping Websites. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–32. https://doi.org/10.1145/3359183.

[39] di Geronimo, L., Braz, L., Fregnan, E., Palomba, F. & Bacchelli, A. (2020, April 21). UI Dark Patterns and Where to Find Them. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3313831.3376600.

[40] Gunawan, J., Pradeep, A., Choffnes, D., Hartzog, W. & Wilson, C. (2021). A Comparative Study of Dark Patterns Across Web and Mobile Modalities. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–29. https://doi.org/10.1145/3479521.

[41] Narayanan, A., Mathur, A., Chetty, M. & Kshirsagar, M. (2020). Dark Patterns: Past, Present, and Future: The evolution of tricky user interfaces. Queue, 18(2), 67-92.

[42] Mills, S. & Sætra, H. S. (2022). The autonomous choice architect. AI & SOCIETY, 1-13.

[43] Sobolev, M. (2021). Digital nudging: using technology to nudge for good. Available at SSRN 3889831. http://dx.doi.org/10.2139/ssrn.3889831.

[44] di Geronimo, L., Braz, L., Fregnan, E., Palomba, F. & Bacchelli, A. (2020, April 21). UI Dark Patterns and Where to Find Them. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3313831.3376600.

[45] Bang, H. M., Shu, S. B. & Weber, E. U. (2020). The role of perceived effectiveness on the acceptability of choice architecture. Behavioural Public Policy, 4(1), 50-70.

[46] Paunov, Y., Wänke, M. & Vogel, T. (2019). Transparency effects on policy compliance: disclosing how defaults work can enhance their effectiveness. Behavioural Public Policy, 3(02), 187–208. https://doi.org/10.1017/bpp.2018.40.

[47] CMA (2022) Online Choice Architecture – How digital design can harm competition and consumers. Retrieved from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1066524/Online_choice_architecture_discussion_paper.pdf.