While most regulatory scrutiny of the big tech sector is couched in terms of competition or the lack thereof, behavioral economics may provide rationales outside that framework. Behavioral economics is generally problematic as a policy guide, as it undercuts the basis for benefit-cost analysis and invites policy makers to substitute their preferences for those of the public they presumably serve. However, it suggests some potential rationales based on thinking being costly and on weakness of will. Beyond behavioral economics, the psychology of preference formation could motivate policy — consider public education — but its application to big tech is amorphous. The potentially most severe concern, that big tech enables a tiny but violent minority to organize destructive actions, likely lies beyond both behavioral economics and the ability of any regulator or legislature to prevent.

By Timothy Brennan[1]

 

Across the globe, many see the “big tech” sector of the economy as a bad actor. Much of that criticism is expressed in terms of insufficient competition in platform markets allegedly dominated by a handful of familiar names — Amazon, Google, Facebook, and Apple. In some of these sectors, one-sided network externalities — people want to use a common service — can lead to most users signing up for the same service, Facebook being a leading example. In others, multi-sided externalities, where for example buyers want to be where most sellers are and sellers want to be where most buyers are, can lead to single platforms, Amazon being the leading example. These externalities need not be exclusive or exhaustive; Google, as a search engine built on feedback from user links and as a platform for selling advertising, has aspects of both. And some firms, Apple for example, may face intense competition in markets for mobile devices yet stand accused of maintaining monopolies over services within their ambit, for example, requiring application developers to use Apple’s App Store, with Apple getting a fixed percentage of revenues from any in-app purchases.

Competitive considerations fall well within standard economic frameworks, which may explain in part why big tech critics choose to express their concerns as antitrust violations. However, alternative frameworks may suggest other rationales for policy interventions into the conduct of these and other firms in the “big tech” arena. One such potential framework, motivating this symposium, is behavioral economics.

I have some suggestions as to where insights drawn from, or related to, behavioral economics may be relevant to present policy concerns. I need to begin, however, with something of a disclaimer — I am skeptical of the usefulness of behavioral economics for policy, or for economics for that matter. After briefly discussing some of the sources of that skepticism, I nevertheless find some potential justifications for “big tech” regulation in these insights. The three I focus on here are (1) the realization that thinking can be costly, (2) the possibility that people may not act according to their “true” preferences, and (3) the effect of present actions on the formation of future preferences. Identifying potential insights does not imply identifying effective regulatory or other policies to address them. This may be most true for the effect of social networks on social fragmentation, which is perhaps the deepest concern and, in my view, has little if anything to do with behavioral economic considerations.

 

I. SETTING THE CONTEXT: BEHAVIORAL ECONOMICS’ LIMITATIONS

Behavioral economics, or ideas related to it, may offer useful perspectives on big tech regulation. One can draw on those perspectives without accepting behavioral economics as a generally valuable contribution to policy or to economics. I am skeptical for a number of reasons.[2]

A first is that behavioral economics conflicts with the requirements of benefit-cost analysis (“BCA”). While BCA, right or wrong, generally has played only a relatively minor part in competition law, it is central to regulation, at least in the U.S.[3] BCA requires monetary measures of how much benefits are worth to people and of the burden of any costs. Ascertaining these values requires (1) that benefits and burdens be measured by people’s willingness to pay for those benefits or to avoid those burdens, and (2) that such willingness to pay be revealed by actual choices in markets or suitable surrogates.[4] Behavioral economics breaks both links in this chain, by claiming that because of cognitive biases, willingness to pay differs from the “true” value to persons, or that revealed willingness to pay differs from actual willingness to pay. That may be correct, but if so, BCA is left without empirical foundation.[5]
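
To make the two links explicit, here is a minimal sketch in my own notation (not the article’s): let $v_i$ be the true net value of a policy to person $i$, $w_i$ person $i$’s willingness to pay for it, and $\hat{w}_i$ the willingness to pay revealed by markets or surrogates.

\[
\underbrace{v_i}_{\text{true value}} \;\overset{(1)}{=}\; \underbrace{w_i}_{\text{willingness to pay}} \;\overset{(2)}{=}\; \underbrace{\hat{w}_i}_{\text{revealed in markets}}, \qquad \text{BCA: adopt the policy iff } \textstyle\sum_i \hat{w}_i > 0.
\]

Behavioral economics denies equality (1) (cognitive bias) or equality (2) (framing effects), leaving the observable sum $\sum_i \hat{w}_i$ uninformative about the welfare sum $\sum_i v_i$.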

Considering alternatives to BCA points out another troubling implication of behavioral economics — who gets to make these decisions? An attractive feature of BCA, and of economics in general, is that decisions are based on what people want, rather than on what any individual in power wants. There are ways to reconcile behavioral economics with delegating decisions to a particular person, which might be called rational delegation, that is, people deciding that they would rather leave certain decisions affecting their lives to an expert whom they choose. (I will return to this below.) That may work, although if people’s decisions are biased regarding outcomes, might they be biased regarding delegation as well? It is too short a distance from claiming that people’s decisions are biased to the view that I (or whoever) am uniquely free of bias and thus should get to make the decisions.

To be honest, though, my most strongly felt objection to behavioral economics is that it amounts to “throwing in the towel” in an intellectual sense. If someone does something we do not understand, we need not try to explain it — we need only assert a bias.[6] Had that been the standard recourse over the decades, we might not have come up with the economic analysis of incomplete markets, asymmetric information, strategic decision making, and other ideas that moved economics beyond the basics of introductory courses.

However, thinking about the questions posed by behavioral economics does lead to some possible rationales for big tech regulation that lie outside that conventional economics box.

 

II. THINKING MAY BE COSTLY: DELEGATING CHOICES

The lesson from behavioral economics most complementary to standard economics is that thinking may be costly. We already understand, without the need for explanation, that physical activity can be costly. We have elevators so we do not have to take the stairs; we have cars so we do not have to walk. Similarly, we develop generally reliable “rules of thumb” to avoid having to think through all possible consequences, for example, inferring from how choices are usually presented what the preferable option is likely to be.

Many of the leading experimental findings supporting behavioral economics could be interpreted as fooling subjects through unexpected framing. It may be reasonable to expect that the default option is the one most people like, and therefore that people are likely to choose it when figuring out the pros and cons is costly. When the framing is unexpected, such as randomly assigning the default option to a particular choice — opt-in or opt-out of an employer-subsidized pension[7] — it should not be surprising that people choose the default rather than what they might prefer, if determining the preferred choice requires costly thought. This kind of result is no more paradoxical than an experiment watching people stand in front of an elevator they do not know is broken for longer than it would have taken to use the stairs.

If thinking is costly, it is not hard to imagine that there may be economies of scale in studying a situation to determine the best outcome. Markets can and do respond to this, for example, buyers relying on a store to evaluate the quality of the goods on its shelves so they do not have to.[8] However, if the scale economies are large enough, or if there are adverse selection problems with intermediaries conveying their expertise to buyers, there may be room for the government to do this thinking. Such a rationale lies at the heart of consumer protection regulation, recognizing the possibility that sellers may mislead consumers by how they structure choices just as they may by what information they do or do not provide.

This conception has obvious application to big tech regulation. If privacy or data disclosure policies are too difficult to think through, the government can establish default rules for them. This is not unprecedented; uniform commercial codes, landlord/tenant contracts, and other settings follow general rules rather than leaving all parties to think through every implication. Arguably, the foundation of the economic approach to contract law — that contracts may be incomplete and thus require judicial interpretation — is itself a manifestation of thinking being costly.

There are two qualifications. A first is that policy makers with the authority to set privacy and data disclosure rules need to understand the benefits, to users and to the economy as a whole, of obtaining and offering access to user information, as well as the costs of enforcing disclosure policies.[9] A second is that to the extent that people have different relevant preferences — some care more about privacy than others — such regulation should perhaps be designed with opt-out provisions, so that those willing to think through the pros and cons can choose a different regime. In general, the more divergent user preferences are in any context, the less likely it is that a uniform default rule will be appropriate.

A second big tech policy issue to which the cost of thinking is relevant is quality control and content moderation. If users of a service would prefer that the information they see be accurate, they may prefer having the content provider ensure accuracy rather than expend the effort to do so themselves. This suggests that policy makers may impose costs on users if they prevent content providers, even large ones, from suspending the accounts of purveyors of falsehoods.

 

III. WEAKNESS OF WILL: LIMITING OPTIONS

A second conception of behavioral economics is that people make mistakes in the pursuit of their own ends. The hard part is distinguishing mistakes from preferences that an outside observer may not understand. For economics-based regulation, as in typical merger assessment, one should take revealed preferences as real: if people regard X and Y as different, even if “rationally” they should be regarded as close substitutes, then X and Y are not in the same market.[10] Other regulatory avenues can attempt to inform consumers of the possibility of a mistake. But if, after being informed, consumers continue to do the “irrational” thing, it should be treated as a preference.

A more compelling idea that goes outside the standard economics box is the notion that people may not want to act in accord with their predicted future preferences. To prevent that, they “precommit” by limiting their future options. The archetypal precommitment story is Ulysses binding himself to the mast to prevent his being lured by the Sirens.[11] A less dramatic example would be paying in advance for a gym membership, rather than paying for each visit, to reduce the marginal cost of going and make it more likely that one will exercise.[12] “Weakness of will” can be thought of as wishing one could precommit to a course of action that one knows or suspects one will not take when the time to act arrives.

Precommitment raises questions beyond standard economics because its tools cannot determine whether the preferences at the time of precommitment or the preferences at the time the precommitment limits choices should govern. Consider X, who shares an apartment with Y. X wants to lose weight, so X tells Y to lock the refrigerator after X eats a salad for dinner, so that X will not be tempted to snack on ice cream at midnight. Midnight comes, and X asks Y for the key. From an efficiency standpoint (assuming Y is indifferent about X’s weight), why shouldn’t Y give X the key? Economics alone cannot tell us whether X’s dinner-time preferences or midnight preferences should be controlling.
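
The behavioral literature often formalizes such reversals with quasi-hyperbolic (“β-δ”) discounting; the sketch below is my illustration, not the article’s. Suppose the midnight snack (time 1) yields immediate pleasure $p$ and a weight cost $c$ felt at time 2, and each period’s self discounts all future utility by an extra factor $\beta < 1$ beyond the standard per-period factor $\delta$:

\[
U_{\text{dinner}}(\text{snack}) = \beta\delta p - \beta\delta^{2} c = \beta\delta\,(p - \delta c), \qquad
U_{\text{midnight}}(\text{snack}) = p - \beta\delta c.
\]

Whenever $\beta\delta c < p < \delta c$ (a nonempty range precisely because $\beta < 1$), dinner-time X prefers the refrigerator locked while midnight X wants the key, each consistently given that moment’s discounting. The model describes the reversal; it does not say which self’s preferences should be controlling.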

Precommitment plays a role in public policy and could rationalize some aspects of big tech regulation. One can view drug laws not as means for me to prevent you from taking harmful drugs, but as means to prevent me from taking them.[13] One could imagine regulations as precommitment methods to address concerns that using big tech devices or applications can be addictive. While one hears concerns along these lines,[14] it is admittedly no clearer how to do that than it would have been to get people (me, that is) to spend less time watching television in the days before the Internet and smartphones.

 

IV. PREFERENCE FORMATION: WHO WILL WE BE?

Standard economics takes preferences as given. However, they have to come from somewhere. One can go past behavioral economics, and more overtly into psychology, to consider the empirical determinants of preference formation — essentially, who we are. This concern did not arrive with big tech. Part of the purpose of public education is to inculcate dispositions toward civic norms. One can view support for the arts not just as a way to deliver certain cultural goods to those willing to pay for them, but as a way to influence what we will want and expect of society in the future.[15]

It is outside my expertise to know how the pervasiveness of big tech enterprises today will influence the culture and people of the future. But it is hard to imagine that there will be no effect. That said, I have no idea whether one should or even how one could usefully regulate big tech to move society in some particular direction. The intensity of continuing culture wars at all levels of education, from public school boards and libraries to university classrooms and faculty gatherings, illustrates just how controversial preference formation policy can be, even before we know how preferences get formed.

 

V. FRAGMENTATION AND POLARIZATION: NOT NEW, BUT WHAT TO DO?

The last observation may have little or nothing to do with behavioral economics insights into thinking costs, precommitment to prevent acting on future desires, or preference formation, at least as a necessary matter. It is that big tech in various ways fosters and activates potentially destructive fringe communities.  

In some ways, this concern is not new. To the extent that people view “news” more as a means to reinforce prior predispositions than to acquire shared knowledge, audience fragmentation has been a concern ever since multi-channel TV delivery washed away the three-network era. It became more profitable for many outlets to differentiate themselves by reinforcing minority viewpoints than to address the median viewer’s interest in information. This is largely consistent with (and perhaps a downside of) competition.

In this regard, however, the current big tech environment is far more problematic. Not only is the audience fragmented, but social media allow communication, belief reinforcement, and the planning of potentially explosive events to take place within that audience fragment. Communication is not just one way, from the outlet to a passive audience. Consider that if only a tenth of a percent of the U.S. population holds some extreme belief, that is 330,000 people — considerably more than enough to storm the Capitol, as on January 6, 2021. My strong sense is that the Capitol insurrection was more than the result of thinking costs, failure to precommit, or preference formation itself. Rather, it is the enabling of coordination among those with extreme viewpoints that is new and crucial.[16]
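
For scale, the arithmetic here assumes a U.S. population of roughly 330 million: $0.001 \times 330{,}000{,}000 = 330{,}000$.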

Many do not like this, but it is not clear what, if anything, can be done about it other than ex post law enforcement. Social media are here to stay. Perhaps bans on false information and its purveyors would help, but that is both an enforcement nightmare and, at least in the U.S., likely to run afoul of constitutional protections of free speech. Competition considerations, abetted by attention to the costs to users of thinking through privacy, data security, and information veracity, may be useful. But the most serious problems in this regard are likely to remain impervious to big tech regulation.


[1] Professor Emeritus, School of Public Policy, University of Maryland, Baltimore County, Baltimore, MD, USA. Email: brennan@umbc.edu.

[2] Many of the points below are argued in more detail in Timothy Brennan, “Behavioral Economics and Policy Evaluation,” 5 Journal of Benefit-Cost Analysis 89 (2014), Timothy Brennan, “Behavioral Economics and Energy-Efficiency Regulation,” 59 Network 1 (2016), and Timothy Brennan, “The Rise of Behavioral Economics in Regulatory Policy: Rational Choice or Cognitive Limitation?” 25 International Journal of the Economics of Business 97 (2018).

[3] Office of Management and Budget, Executive Office of the President, Circular A-4, Regulatory Analysis, 68 Fed. Reg. 58366 (Oct. 9, 2003), following bipartisan Executive Orders requiring the use of benefit-cost analysis in regulatory assessment. See also Dudley et al., “Consumer’s Guide to Regulatory Impact Analysis: Ten Tips for Being an Informed Policymaker,” 8 Journal of Benefit-Cost Analysis 187 (2017).

[4] Carrying out the second step is often difficult. As regulation is designed to correct the failure of markets to reflect certain values, such as the willingness of people to pay for a cleaner environment or safer highways, indirect methods for measuring willingness to pay outside market prices are typically required.

[5] Cass Sunstein, “Cognition and Cost-Benefit Analysis,” 29 Journal of Legal Studies 1059 (2000), argued that behavioral economics and BCA can be reconciled, but that argument was only that persons’ errors justify substituting government decisions for their own. He did not show what data would be used to justify those decisions in place of market data based on putatively erroneous decisions.

[6] Violating my general request to students that they not cite Wikipedia, Wikipedia lists (if I counted correctly) 88 cognitive biases in 13 categories, with another 37 classified as “Other.” Wikipedia, List of Cognitive Biases, https://en.wikipedia.org/wiki/List_of_cognitive_biases, accessed 16 September 2022.

[7] Shlomo Benartzi & Richard Thaler, “Heuristics and Biases in Retirement Savings Behavior,” 21 Journal of Economic Perspectives 81 (Summer, 2007).

[8] This consideration could support some contentious big tech activities, e.g. smartphone users preferring Apple because it insists that only apps it approves can be made available for iPhones.

[9] Michal Gal & Oshrit Aviv, “The Competitive Effects of the GDPR,” 16 Journal of Competition Law & Economics 349 (2020).

[10] Timothy Brennan, “Behavioral Economics and Merger Enforcement: A Speculative Guide,” 9 Threshold: American Bar Association Mergers and Acquisitions Committee 21 (No. 2, 2009).

[11] Jon Elster, Ulysses and the Sirens: Studies in Rationality and Irrationality (1979) is perhaps the leading discussion of precommitment in the social sciences literature, and surely the most engaging.

[12] Jon Elster, “Weakness of Will and the Free-Rider Problem,” 1 Economics and Philosophy 231 (1985).

[13] Elster uses the term “self-paternalism.”

[14] See, for example, Sehar Shoukat, “Cell phone addiction and psychological and physiological health in adolescents,” 18 EXCLI Journal 47 (2019).

[15] This argument is touched on in Timothy Brennan, “The Trouble with Norms,” in Koford, Kenneth & Jeffrey Miller (eds.), Social Norms and Economic Institutions 85 (1991).

[16] Lest this seem politically one-sided, one could wonder what demonstrations in opposition to the Vietnam War might have looked like had organizers had the same ability to plan via social media as the far right has today.