The term “dark patterns” has become popular and gained much attention from both enforcers and academics. It connects closely to behavioral studies and to the relevance of choice architecture, notably in the online environment. If dark patterns entail the deployment of choice architecture in ways that misguide individuals and may lead to harm, one relevant question in this discussion is whether the mere fact that some form of manipulation is being deployed means the practice should be deemed unlawful. This article proposes that though the discussion on the legality of manipulation is relevant – and the definition of what counts as manipulative is paramount – the dark patterns debate gains more by focusing on the impact of dark patterns’ deployment on individuals.
By Marcela Mattiuzzo[1]
As noted by Thaler, Sunstein & Balz,[2] people do not make decisions in a vacuum. Rather, they decide in specific environments. Those who design the environments in which decisions are made are referred to as “choice architects” and have considerable power in influencing what those decisions will be, precisely because they are able to meddle with the features of that same environment (Thaler, Sunstein & Balz, 2010).[3] As behavioral science has shown, rather than being fully rational, utility-maximizing individuals, human beings are highly susceptible to all kinds of influence. Awareness of these susceptibilities allows choice architects to create designs that foster specific decision-making.
The literature on how individuals actually behave, and on how that behavior significantly departs from what would be expected of the homo economicus, is vast and far-reaching.[4] By now, three Nobel prizes have been granted to academics who dedicated their careers to behavioral studies.[5] The field of behavioral economics has grown and provided relevant information not only to economists, but also to policymakers concerned with devising better strategies and solutions to address the incentives for individuals to act in certain ways.
The limits of human rationality (or bounds of human behavior) are relevant not just because they allow for a better description of how individuals act, but also because – and this is of paramount relevance – research has shown that biases are predictable and follow patterns (Thaler & Sunstein, 2008). In other words, it is not that behavioral science has destroyed the usefulness of economic models by concluding that humans operate in entirely unpredictable ways, but rather that it has shown predictability within irrationality.
As highlighted by Akerlof & Shiller, behavioral economics is relevant not because it shows that human beings are entirely irrational and their actions therefore impossible to predict. On the contrary, it is relevant because it allows for better prediction of human behavior, as academics have long been able to identify patterns in irrationality[6] – for example, reasons for procrastination or decision paralysis. For that same reason, behavioral science facilitates rather than impedes the economic debates that are essential in drafting norms. If individuals do not respond as rational agents who always maximize their own interests, but frequently fail to reach that goal for reasons that repeat themselves over time, then one can (and should) use that information to design legislation that better protects consumers and incentivizes competition.
Likewise, because irrational patterns are predictable, they leave room for manipulation – and more specifically for choice architects, if they so wish, to make use of manipulative strategies. Given that individuals tend to act in similar ways, and that their actions are not fully rational, one can exploit the limits of rationality to steer people toward certain conclusions and certain ways of acting. The goal of this article is to propose a discussion on the (ir)relevance of the concept of manipulation in defining the (un)lawfulness of the use of choice architecture, and more specifically of dark patterns, in online environments.
First it is important to clarify that the term “dark pattern” has no single, settled definition. In an overview of the variety of definitions, Mathur et al. identified 19 instances in which the term was defined. They explain that after Harry Brignull first introduced the term in 2010 on the website darkpatterns.org, describing dark patterns as “tricks used in websites and apps that make you do things that you didn’t mean to, like buying or signing up for something,” a flurry of academic research made use of the expression (p. 3).[7] Their research revealed what they understand to be four different “facets” of dark patterns, namely: (i) the characteristics of the user interface that can affect users, (ii) the mechanism of effect for influencing users, (iii) the role of the user interface designer, and (iv) the benefits and harms resulting from a user interface design (Mathur, Mayer & Kshirsagar, 2021).[8]
I will build toward a more specific definition of dark patterns throughout this article, but for now it is enough to say that they are deployments of choice architecture that influence users’ decision-making.
If one deploys choice architecture in the online environment, it is not immediately clear that such conduct should be unlawful. First, for an obvious reason: the result can be beneficial to the user. For instance, if someone designs a platform in a way that gives the user more information and more accurate details about the products she is about to buy, that is likely to be good for that person. But the issue I am interested in debating concerns scenarios in which negative impacts on consumers do take place. In that context, it is important to thoroughly examine the idea of manipulation as a specific form of influence, to better understand what about this deployment of choice architecture would be potentially unlawful. Is the mere fact that manipulation is taking place – and users’ decision-making being influenced – the issue, or does the problem arise only when the result of such influence is detrimental to consumers?
I. CONCEPTS OF MANIPULATION
To discuss whether manipulation is lawful, we must first debate in more depth what exactly manipulation entails. First and foremost, we should note that there is ample debate on the concept of manipulation, and it is by no means straightforward to delineate its precise contours.
Jongepier & Klenk contribute to this discussion by asking a question of particular relevance here: whether there is anything that makes online manipulation effectively different from offline manipulation (and if so, what that is).[9] They start out by clarifying that the specific characteristics of manipulation as a concept are hard to define, but also that “the study of manipulation does not stand or fall with the propensity of the concept ‘manipulation’ to bend to complete analysis in terms of necessary and sufficient conditions. Manipulation, though perhaps vague, varied, and beset with borderline cases, may yet be unified by Wittgensteinian family resemblance, that is, not a set of shared properties but a resemblance to paradigm cases of manipulation.”[10] In that light, they propose a search for “demarcating factors” that distinguish manipulation from other practices, carrying out a literature review of recent work in this field. Their conclusion is that it is important to form a theory of manipulation that has a clear methodology, and to clarify one’s aim in developing such a theory (Jongepier & Klenk, 2022).[11]
In that light, it is not the objective of this piece to provide a definitive answer to the question of what manipulation entails, though I do aim to provide a definition useful for the purposes of the dark patterns debate. I adhere to Susser et al.’s[12] understanding that manipulation is a form of influence which specifically attempts to “change the way someone would behave absent the manipulator’s interventions.” Manipulation, in that sense, is different from other practices such as persuasion or coercion, because it involves “taking hold of the controls” in order to “displace [people] as the deciders.”[13] In other words, “whereas persuasion and coercion work by appealing to the target’s capacity for conscious decision-making, manipulation attempts to subvert that capacity.”[14]
It is important to note that, under this definition, manipulation has nothing to do with leading a person to make non-ideal decisions: someone can be manipulated into making better decisions. The covertness of manipulation matters much more to its definition than the goal of the manipulator. This is where dark patterns and manipulation differ. Manipulation can be employed “for good” and, in the now famous concept popularized by Thaler & Sunstein, individuals can be “nudged” toward making better decisions, even if that process involves some level of hiddenness.[15] Dark patterns, however, always and by definition (or at least according to the definition I intend to propose herein) leave individuals worse off.
II. MANIPULATION AND TECHNOLOGY
A second aspect that requires further analysis is whether manipulation is at all different when it is deployed by use of technology. Jongepier & Klenk propose that there are aggravating factors regarding technology that should be taken into consideration, namely: personalization, opacity, flow, and lack of user control.[16] Personalization, understood as “the way in which (e.g. machine learning) algorithms are designed such that they can deliver something that is in line with the user’s preferences, personality, and so on” (p. 35), has the potential to enhance the relevance and effectiveness of manipulation. The idea of opacity, though itself debatable, relates to lack of transparency. Flow, in turn, refers to the user’s seamless online experience – which, though overall desirable, can “prevent one from being aware of relevant knowledge, can hamper one’s opportunities to reflect, can bypass one’s rationality, and thus prevents one from gearing one’s behavior in directions that better fit one’s larger or deeper desires or ideals” (p. 39). Finally, lack of user control means there is little a user can do, even when she is aware that she is trapped inside a filter bubble, to break out.
The observations by Jongepier & Klenk should be understood in light of other authors’ contributions. Notably, the concept of market manipulation was first coined by Hanson & Kysar[17] in a famous piece from the 1990s that aimed specifically at making use of behavioral science to show how market outcomes can be influenced. In their words, “[the] susceptibility to manipulation produces an opportunity for exploitation that no profit-maximizing manufacturer can ignore.”[18] As the authors very poignantly point out, the possibility of manipulating consumers is relevant because it means firms have no option but to capitalize on it; otherwise, they will be losing precious market opportunities. That gives rise to a market failure: consumer biases become an endogenous force that shapes markets.
Calo proposed an adaptation of the concept to current terms by calling Hanson & Kysar’s proposal “nudging for profit.”[19] He further clarifies that though the idea of manipulation in markets was already relevant back in the 1990s, it became significantly more important once the mediated consumer and big data came about. Roughly put, the mediated consumer is one who does not interact directly with the firms that provide goods or services, but rather purchases through devices, leaving a trail that can be used to firms’ benefit, precisely to design strategies aimed at manipulating behavior; the use of big data, in turn, involves “parsing very large data sets with powerful and subtle algorithms in an effort to spot patterns.”[20] Calo argues that companies can look for biases in these large data sets of consumers’ trails and adopt strategies aimed at exploiting vulnerabilities in much more effective ways than before.[21]
Another aspect that deserves a deeper dive in clarifying the relevance of technology is choice architecture – and more specifically the role of architects in shaping decision-making. The concept of choice architecture, as stated previously, has been around for some time. Its deployment in digital markets, just like digital markets themselves, is more recent. It is not particularly challenging to understand that how options are presented to us makes a difference in determining what we effectively choose. But the devil is in the details, and the relevance of choice architecture is all the greater the less we are able to identify it.
More radical illustrations of the relevance of design can be found in gambling. In Addiction by Design,[22] Schüll explains that the enterprise that sustains gambling is built on reinforcement schedules. Gambling machines, such as slot machines, are designed to hook the player through the simple logic of providing rewards for her actions. The trick is that, though the person knows rewards can be granted, she is entirely unable to predict when they will be granted. Referencing the studies by Skinner, the author highlights that those schedules can be stretched by “someone who controls the odds”[23] – or, as I would call it, by the choice architect. Schüll also notes that the adjustments made to game development do not simply “detect and conform to existing market preferences, [but rather] have transformative effects on those preferences.”[24]
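To make this mechanic concrete, consider a minimal sketch in Python of a variable-ratio reinforcement schedule. The payout probability, prize, and stake are illustrative values of my own choosing, not drawn from Schüll’s work; the point is that the operator fixes the long-run odds while each individual pull remains unpredictable to the player.

```python
import random

# Illustrative parameters, set by whoever "controls the odds":
PAYOUT_PROBABILITY = 0.05   # chance of a reward on any given pull
PRIZE = 15.0                # amount paid out when the player wins
STAKE = 1.0                 # cost of a single pull

def pull() -> float:
    """One pull on a variable-ratio schedule: the player cannot
    predict whether this pull pays out, but the long-run rate of
    rewards is fixed in advance by the choice architect."""
    return PRIZE if random.random() < PAYOUT_PROBABILITY else 0.0

# Outcomes look erratic pull by pull, yet the operator's margin
# converges to a value chosen in advance.
pulls = 100_000
net = sum(pull() - STAKE for _ in range(pulls))
print(f"House edge per pull: {-net / pulls:.3f}")  # ~ 1 - 0.05 * 15 = 0.25
```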
In a similar light, Hartzog identifies the relevance of choice architecture in connection to privacy. The author emphatically points out that, as much as we are led to believe otherwise, design is never neutral, because it always “communicates information and enables (or hinders) activities.”[25] It provides signals to people, and as such “helps define our relationships and our risk calculus when dealing with others.”[26] It also alters transaction costs, by making tasks easier or harder to accomplish. In the online environment in particular, a lot of effort tends to be spent on facilitating interaction, and it has been proven time and again that slight increases in cost can have relevant impacts.[27] With that background, he highlights that the problem with design and privacy lies primarily in market incentives – there are few incentives for companies to invest in less data collection, and the more data collected, the more users are exposed to potential harm. He further proposes that adequate regulation should focus on design itself, because “the design of popular technologies is critical to privacy, and the law should take it more seriously.”[28]
The more important point here is that the general idea behind choice architecture remains the same – online environments, just like any other environment, must be designed somehow; items have to be displayed in some order, colors have to be chosen for each segment of a page, and so on, and, just as happens offline, how such choices are framed can be better or worse for users. Yet the complexity and importance of this debate are greater online, because online environments are much easier (and cheaper) to design and to experiment on. Designers can deploy numerous A/B tests on online platforms that they would be unable to run offline, as the sketch below illustrates. The level of granularity of design options therefore increases. It is not only a matter of choosing whether product 1 or product 2 will be placed first, but also of what color will most engage users, what choice of words will be more appealing, what order of placement will provide better results, and infinitely many other options.
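As a hypothetical illustration of how cheap such experimentation is online, the following Python sketch (the variant names and bucketing scheme are mine, purely for illustration) assigns each user to one of several design variants, which is the basic building block of an A/B test:

```python
import hashlib

# Hypothetical design variants a choice architect might test:
# button color, wording, and placement are all cheap to vary online.
VARIANTS = ["green/BUY NOW/top", "red/Buy now/top", "green/Add to cart/bottom"]

def assign_variant(user_id: str) -> str:
    """Deterministically buckets a user into a variant, so each user
    always sees the same design for the duration of the experiment."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % len(VARIANTS)
    return VARIANTS[bucket]

# Conversions would then be logged per variant; the best-performing
# design wins, whether or not it is the best design *for the user*.
print(assign_variant("user-42"))
```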
Looking at choice architecture through the lens of behavioral economics allows us to see how the two intertwine, and how design can be used, with the help of behavioral biases, to negatively impact both users and markets. As Akerlof & Shiller point out, we must be aware that economic agents will always take advantage of situations in which they can turn higher profits. If they identify behavioral biases that open up business opportunities, they will exploit such biases. The authors further clarify how this has been done time and time again, in situations as different as the 2008 financial crisis and the pharmaceutical industry.[29]
There is no reason to believe this will be any different in digital markets – in fact, there is ample evidence that the same is likely to happen to an even worse degree. Studies have shown how platforms can deploy choice architecture in ways that may harm users, markets, or both – notably, the reports on the topic by the UK Competition and Markets Authority (“CMA”),[30] the Organization for Economic Cooperation and Development (“OECD”), and the European Commission (“EC”) compile evidence and classify the different methods by which such results may be reached.[31] The OECD also offers some potential explanations for why the deployment of deceptive practices in online environments tends to be more damaging to consumers. It claims that businesses are more aware of opportunities for exploiting behavioral biases, but also that consumers’ behavior online is significantly different: they are less attentive, process information less well, more frequently default to simple rules of thumb, and are in general more task-oriented – which leads them to ignore content more easily, as well as to underestimate manipulation.[32]
III. THE (UN)LAWFULNESS OF MANIPULATION AND DARK PATTERNS
By adhering to a definition of manipulation that requires the subversion of an individual’s capacity to understand what is going on, I suggest that manipulation necessarily involves diminishing people’s capacity for rational deliberation. As stated in the previous section, there is reason to believe that the potential to do so in online environments is heightened. The question that can be further discussed, in this context, is whether manipulation is itself “bad” or, put differently, whether individuals who are subject to manipulation have any normative reasons to object to its deployment, even if none of the decisions they may have taken while being manipulated results in harm or other unfavorable outcomes.
In attempting to tackle that question, Sunstein notes that manipulation can be considered a moral wrong under both Kantian and welfarist approaches. For Kantians, it is wrong because “it is not respectful of choosers”[33] (p. 1960), offending their autonomy. For welfarists, the risk of manipulation is that it can promote the manipulator’s own interests, “rather than those of the chooser” (p. 1961). Furthermore, even when manipulators are acting in the chooser’s best interest, they often lack knowledge of what is best for each chooser, and the results can be equally problematic.[34]
Sunstein also states that though we should be able to agree, on different grounds, that there is a certain category of actions that can be classified as manipulation and that can be harmful to individuals, it may well be that this category is “properly promoted or discouraged by social norms, but properly unaccompanied by law or regulation.”[35] In other words, manipulation may be wrong, but not necessarily illegal. For this reason, he proposes that the best way to counter manipulation is to focus on specific forms of manipulative behavior that are clearly harmful and hard to defend. He suggests that assessing transparency – the extent to which people are aware of what is going on and of what they are being led to do – and the general goal of the practice vis-à-vis the interests of most people subject to it would be a way forward.[36] Other authors follow similar paths and argue, for example, that the unlawfulness of manipulation should be assessed based on what the manipulator is trying to accomplish.[37]
Instead of trying to provide a general account of how manipulation can be illegal, I will attempt to answer the question of whether manipulation is lawful within the narrow terms of my definition of the concept, as well as within the purposes of the “dark patterns” discussion. To do so, a clearer definition of dark patterns is a helpful step forward. In that sense, I propose that dark patterns must (i) encompass the deployment of choice architecture in the online environment (ii) that manipulates individuals (iii) into achieving a result that is beneficial to the choice architect (iv) and detrimental to the user. In behavioral lingo, dark patterns work by exploiting System 1 decision-making while eliminating (or substantially minimizing) System 2 processes.[38]
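Purely as an illustration of how the four elements operate together, the following Python sketch (the class and field names are hypothetical, mine alone) restates the proposed definition as a checklist; note that intent appears nowhere in it, a point taken up below:

```python
from dataclasses import dataclass

@dataclass
class Practice:
    """Hypothetical description of a design practice, mirroring the
    four elements of the definition proposed above."""
    uses_online_choice_architecture: bool  # (i)
    manipulates_user: bool                 # (ii) subverts conscious deliberation
    benefits_architect: bool               # (iii)
    harms_user: bool                       # (iv)

def is_dark_pattern(p: Practice) -> bool:
    # All four elements must be present; intent is deliberately absent.
    return (p.uses_online_choice_architecture and p.manipulates_user
            and p.benefits_architect and p.harms_user)

# A pre-ticked recurring-subscription box would plausibly satisfy all
# four elements; a nudge toward a beneficial default would fail (iv).
preticked_subscription = Practice(True, True, True, True)
print(is_dark_pattern(preticked_subscription))  # True
```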
In that context, though I believe it is viable to argue that manipulation is unlawful, I also understand it is not possible to say that all forms of manipulation are illegal – for, as mentioned, manipulation can be employed in the manipulee’s best interest. Though one could say that the mere subversion of the rational capacity for deliberation is a moral wrong, arguing it is legally impermissible is quite different and, in the present context, a burdensome effort that provides minimal practical benefit. Given that my concept of dark patterns already entails a detrimental result to users, a more functional approach suggests focusing on those impacts instead of devising a theory on the rightfulness of manipulation. Again, that is not to say this is not relevant, nor that it cannot be done, but simply to highlight that a debate on dark patterns need not be constrained to that discussion.
Note that the proposed definition leaves aside yet another aspect that is often part of the debate, that is, whether dark patterns need to encompass intent in order to effectively be considered “dark.”[39] Intent is extremely challenging to establish and often hard to assess, especially when dealing with corporations instead of individuals. And it is precisely because the discussion I aim to carry out is focused on institutions that I suggest leaving aside the debate on whether the goal of the company was indeed to impair individuals’ deliberative capacities. Most legislation that deals with corporate conduct treats the question of whether the company intended a given result as relevant in determining sanctions or damage liability, but not as central in verifying whether the practice was illicit and/or should be penalized. Another reason for leaving that discussion aside is that, as clarified, choice architecture is neither accidental nor neutral. As such, the way any environment is designed will invariably tend to serve its architects’ purposes. Even if the person (or company) in charge did not necessarily anticipate the negative consequences of their choices, the more likely scenario is that the choices themselves are not random. Therefore, it makes sense to assume, at least at first sight, that intent is not an aspect that should be assessed in much detail to establish liability in this context.
If the legality of dark patterns should be assessed not according to how users were influenced into reaching certain decisions, but rather according to whether those decisions are detrimental or harmful, the focus of the dark patterns debate naturally shifts toward specific practices and their impacts. Lawfulness will be determined by the result of a given conduct, and not by the wrongfulness of the conduct itself.
As I understand it, this approach is significantly simpler and only marginally less useful in terms of policy debates. Again, that is not to say that discussing the legality of manipulation is not relevant, but merely that current research indicates that because this is not a well-defined and uncontroversial concept, assessing whether its deployment is somehow unlawful is not clear-cut and will be context-dependent. In that sense, focusing on effects is a useful shortcut. It serves to show that if manipulation is deployed by use of choice architecture in online environments and the result of that interaction is positive for the company while consumers are negatively impacted, then there is room to deepen the assessment of the lawfulness of the conduct.[40]
[1] PhD Candidate at the University of São Paulo, Visiting Fellow at the Information Society Project at Yale University. Partner in competition law and data protection at VMCA.
[2] Thaler, Sunstein & Balz, Choice Architecture (SSRN Electronic Journal, 2010).
[3] Ibid. 4.
[4] Richard H. Thaler, From Homo Economicus to Homo Sapiens (Journal of Economic Perspectives, Volume 14, Number 1, 2000).
[5] Herbert Simon was awarded the Nobel Prize in Economic Sciences in 1978, followed by Daniel Kahneman and Vernon Smith in 2002, and more recently by Richard H. Thaler in 2017.
[6] Akerlof & Shiller, Phishing for Phools: The Economics of Manipulation and Deception (Princeton University Press, 2015).
[7] Mathur, Mayer & Kshirsagar, What Makes a Dark Pattern… Dark?: Design Attributes, Normative Considerations, and Measurement Methods (Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021), 3.
[8] Ibid.
[9] Fleur Jongepier & Michael Klenk (eds.), The Philosophy of Online Manipulation (Routledge – Taylor & Francis Group, 2022).
[10] Ibid. 17.
[11] Ibid. 19.
[12] Susser, Roessler & Nissenbaum, Online Manipulation: Hidden Influences in a Digital World (Georgetown Law Technology Review, 2019).
[13] Ibid. 16.
[14] Ibid. 17.
[15] A nudge, as Thaler & Sunstein describe, “is any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting the fruit at eye level counts as a nudge. Banning junk food does not.” Thaler & Sunstein. Nudge: Improving Decisions About Health, Wealth, and Happiness (Yale University Press, 2008), 6.
[16] Jongepier & Klenk, The Philosophy of Online Manipulation, 35.
[17] Hanson & Kysar, Taking Behavioralism Seriously: The Problem of Market Manipulation (New York University Law Review, Vol. 74, 1999), 632.
[18] Ibid. 722.
[19] Ryan Calo, Digital Market Manipulation (82 George Washington Law Review 995, 2014), 1001.
[20] Ibid. 1008.
[21] Ibid. 1008.
[22] Natasha Dow Schüll, Addiction by Design: Machine Gambling in Las Vegas (Princeton University Press, 2012).
[23] Burrhus Frederic Skinner, Beyond Freedom and Dignity (Pelican Books, 1971), 40.
[24] Schüll, Addiction by Design, 111.
[25] Woodrow Hartzog, Privacy’s Blueprint: The Battle to Control the Design of New Technologies (Harvard University Press, 2018), 26.
[26] Ibid. 27.
[27] Ibid. 29.
[28] Ibid. 7.
[29] Akerlof & Shiller, Phishing for Phools, 38.
[30] See https://www.gov.uk/government/publications/online-choice-architecture-how-digital-design-can-harm-competition-and-consumers.
[31] See European Commission, Behavioural Study on Unfair Commercial Practices in the Digital Environment (Publications Office of the EU, 2022), available at https://op.europa.eu/en/publication-detail/-/publication/606365bc-d58b-11ec-a95f-01aa75ed71a1/language-en.
[32] OECD, Dark Commercial Patterns (OECD Digital Economy Papers, No. 336, 2022).
[33] Cass R. Sunstein, Manipulation as Theft (Journal of European Public Policy, 29:12, 1959-1969, 2022), 1960.
[34] Sunstein further argues that the welfarist argument is largely based on John Stuart Mill’s harm principle.
[35] Ibid. 1963.
[36] Ibid. 1964.
[37] For example, Eric Posner argues that the end of manipulation “is typically one’s own advantage, but it need not be. Parents frequently manipulate their children for the children’s interest, and not for (or not just for) the parents’.” Eric A. Posner, The Law, Economics, and Psychology of Manipulation (Coase-Sandor Working Paper Series in Law and Economics No. 726, 2015), 2.
[38] According to Daniel Kahneman, in Thinking, Fast and Slow, “The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps.”
[39] There are two ways intentionality can be understood in this context, according to Jongepier & Klenk: the general intentionality requirement speaks to the requirement that manipulators be agents. The specific intentionality requirement, in turn, requires “intentions with a particular content” (p. 22). I am here focused on the specific requirement, by which one would need to assess the particular goals of the concrete action.
[40] The specific requirements for legality will then vary depending on what kind of wrongdoing one is interested in assessing. For antitrust, conduct would fall within the rule of reason analysis, and aspects such as market power would have to be investigated. For data protection, issues such as transparency and users’ consent might be the most pressing. And so on.