The 2022 revision of the Merger Guidelines is likely to introduce a number of controversial changes; this paper focuses on one potential change, the elimination or marginalization of price modeling in merger analysis. Recognized as useful in the 2010 Guidelines, these models have been applied, with mixed results, in a number of recent litigations. However, the analytical disconnect between using these static models to predict price and the recognition that innovation drives product differentiation raises concerns. In some situations, product design is the core aspect of competition, while in others price and non-price conditions of sale interact such that the idea of firms dictating prices for differentiated goods is an illusion. Exploring the foundational models behind price modeling offers some insights into the problem. The paper concludes that unilateral effects analysis should return to its historical focus on the totality of the evidence.

By Malcolm B. Coate[1]

 

I. INTRODUCTION

The expected revision of the 2010 Merger Guidelines is likely to result in changes to the merger review process at the enforcement agencies, some of which were signaled by official commentary. In a recent speech, Assistant Attorney General Jonathan Kanter raised three issues with the consumer welfare standard, the approach that has controlled antitrust enforcement for the last 40 years.[2] His first concern noted that the consumer welfare standard offered no protection against unconstrained corporate power, an antitrust goal with an even longer history.

The second concern involved the evolution of consumer welfare analysis from a qualitative review of the factual evidence to theoretical simulation of post-merger pricing. These modeling techniques, colorfully characterized by Kanter as the “central planning standard,” were introduced into the 2010 Guidelines and have been actively employed in merger litigation, with mixed success.

The third problem focused on the inability of the consumer welfare standard to address labor market issues. To this economist, the first and third concerns are related to an oversimplification of the consumer welfare standard and can be addressed on a case-by-case basis.[3] However, the second concern, the Agencies’ broad-based application of price simulation modeling, represents a more serious problem. Too often, mathematical modeling of competitive concerns applies only a veneer of quantification to the complex competitive questions raised by the merger.

Thus, price modeling could either miss actual competitive concerns or manufacture artificial problems that exist only in theory. Although one could argue that the potential for error is acceptable for mature (legacy) markets, many of the most interesting mergers filed in the United States involve dynamic marketplaces, in which price is simply one competitive tactic employed by the firms in the market. Moreover, even the idea of a legacy industry is becoming obsolete, as innovation drives change across the economy.

Chicago economists have long known that Nash-Bertrand price models such as those endorsed in the 2010 Merger Guidelines are only “possibility models” that define what could happen, not “generalizing models” that define how specific structural changes (e.g. a merger to monopoly in a market characterized by barriers to entry) lead to anticompetitive effects (e.g. higher prices).[4] Inapplicable price analysis is likely to generate flawed predictions on consumer welfare, and thus this “central planning standard” fails as a general policy tool. Merger simulation proponents would likely respond by noting that their price models only relax two of the assumptions (homogeneous goods and large numbers of firms) in the foundational competitive model and then use a little math to derive a generalizing competitive equilibrium for a post-merger world. Both changes are thought to make the competitive model more realistic, as consumers clearly demand differentiated products and ubiquitous economies of scale obviously preclude large numbers of firms. This paper suggests that the competitive complications introduced by product differentiation generally preclude the use of static price-based modeling structures.  
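
To see what such a simulation involves, consider a minimal sketch, assuming two symmetric single-product firms, linear demand, constant marginal cost, and purely illustrative parameter values. The exercise converts the demand assumptions directly into a predicted post-merger price increase.

```python
# Minimal Nash-Bertrand merger simulation (illustrative only).
# Two symmetric firms sell differentiated substitutes with linear demand
#   q_i = a - b*p_i + d*p_j   (b > d > 0)
# and constant marginal cost c. All parameter values are hypothetical.
a, b, d, c = 100.0, 2.0, 1.0, 10.0

# Pre-merger: each firm maximizes (p_i - c) * q_i taking p_j as given.
# The symmetric first-order condition a - 2*b*p + d*p + b*c = 0 yields:
p_pre = (a + b * c) / (2 * b - d)

# Post-merger: one owner maximizes joint profit over both products,
# internalizing the diversion between them; the symmetric first-order
# condition a - 2*(b - d)*p + (b - d)*c = 0 yields:
p_post = a / (2 * (b - d)) + c / 2

print(f"pre-merger price:   {p_pre:.2f}")    # 40.00
print(f"post-merger price:  {p_post:.2f}")   # 55.00
print(f"predicted increase: {100 * (p_post / p_pre - 1):.1f}%")
```

The point is not the specific numbers, which are artifacts of the assumed demand curve, but that every step from assumption to prediction is mechanical; nothing in the exercise can flag the cases in which the underlying assumptions fail.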

To understand the problem, it is helpful to explore the economic foundations of price modeling (technically, Nash-Bertrand analysis). In effect, theorists blend together three classic methodologies (Bertrand’s model of market equilibrium, Chamberlin’s model of product differentiation, and Lerner’s model of monopoly pricing), sprinkle in a little game theory, and, voila, produce a price model of competition for differentiated products. The problem with this analysis is that each classic model has specific limitations that affect its applicability, and these limitations are often lost in the blending process. Once one understands the concerns, the Nash-Bertrand price model must be seen as only a possibility model. Economic price modeling may very well work for a few markets, but special cases should not be written into the Merger Guidelines and certainly should not be applied simply because some type of price data is available.[5] Instead, the Guidelines should focus on defining how to build a general understanding of the competitive process potentially affected by a merger.

 

II. EVOLUTION OF THE DIFFERENTIATED PRODUCTS PRICING MODEL

The Nash-Bertrand pricing model combines Bertrand’s conjecture that firms set price on the assumption their rivals will hold price constant, Chamberlin’s characterization of differentiated products, and Lerner’s monopoly analysis to define what is alleged to be a more realistic characterization of competition. To better understand the modeling process, it is first necessary to present the background for all three analyses.

A. Bertrand’s Model of the Competitive Process

In his commentary on the Cournot model, Bertrand recognized that Cournot’s duopoly equilibrium would change if, instead of competing on quantity, the two firms set price while assuming the rival holds its price fixed.[6] Instead of converging to a price noticeably above the competitive level, the duopoly price is competed down to the competitive level. Formally, the Bertrand conjecture with homogeneous goods and constant costs mirrored the optimization process of a perfectly competitive model in which firms set price on the assumption that their rivals will not directly respond with a price adjustment. Each firm performs the thought experiment simultaneously, updating its observations of rivals’ decisions and repeating until the market reaches equilibrium.
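
A stylized rendering of this undercutting logic (arbitrary starting prices and a purely illustrative cost figure) shows the convergence:

```python
# Illustrative Bertrand thought experiment: two firms with identical
# constant marginal cost c sell a homogeneous good. The lower-priced
# firm takes the whole market, so each firm's best response is to
# undercut its rival slightly whenever price still exceeds cost.
c, tick = 10.0, 0.01           # marginal cost and smallest price cut
p1, p2 = 50.0, 48.0            # arbitrary starting prices

while min(p1, p2) - tick > c:
    p1 = max(c, min(p1, p2) - tick)   # firm 1 undercuts the market price
    p2 = max(c, min(p1, p2) - tick)   # firm 2 responds in kind

print(round(p1, 2), round(p2, 2))     # both prices end up at (about) c
```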

One important issue with the Bertrand methodology is the inability of the static modeling structure to offer insights into a competitive adjustment process. For Bertrand competition with homogeneous goods and constant cost firms, entry is not considered, because no profits exist once the market reaches equilibrium. For example, changes in input costs lead to changes in marginal cost and therefore price and firm-level output, but no profit exists to trigger entry. Likewise, a reduction in the number of rivals does not trigger entry as long as at least two rivals remain. Thus, the simple version of the Bertrand model seems unable to address the introduction of new firms, hardly a surprising result for a model from the 19th century. However, entry is a key consideration in merger analysis, and a modeling structure that abstracts from entry is a problem for more realistic competitive environments.

Second, in contrast to the “invisible hand” concept of perfect competition, these game theoretic models rely on the strategic decisions of the competitors to generate the equilibrium. This modeling structure transforms the competitive process to allow firms to set price, while customers are only allowed to make purchase decisions. Although such a strategic assumption may be reasonable in certain fact scenarios, restricting strategic behavior to one side of the market is clearly a limiting assumption.

B. Chamberlin’s Model of Product Differentiation

Chamberlin asked what would happen if each firm sold a differentiated product, instead of the homogeneous product envisioned by perfect competition.[7] To simplify the analysis, each firm’s product was considered to be a symmetric substitute for every other product. By modeling the competitive process as a collection of firm-specific demand and supply (marginal cost) curves, Chamberlin recognized that each firm would raise price above marginal cost, potentially earning a profit. In effect, each market would include a collection of micro-monopolists, each raising price above cost and thereby restricting output below the “efficient” level. Profits would be earned as long as price exceeded average total cost. Chamberlin “closed” the model by allowing entry to occur in response to profits, thereby shifting firm-level demand curves, lowering prices, and eliminating profits. A “perfect” product differentiation model could be characterized by a zero-profit equilibrium, but this structure is alleged to result in a social welfare loss, as firms do not set price equal to marginal cost.[8] As one would expect, economists have expanded the product differentiation literature over the last 90 years, with numerous studies offering alternative modeling techniques for product differentiation.
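
A minimal numeric sketch of this entry mechanism, assuming linear firm-level demand that shifts inward as entry splits the market (all parameter values are hypothetical), illustrates the zero-profit tendency:

```python
# Stylized Chamberlin entry process: each of n symmetric firms acts as
# a micro-monopolist on its own demand curve q = a/n - b*p, which
# shifts inward as entry splits the market. Constant marginal cost c,
# fixed cost F. Entry continues while profit remains positive.
a, b, c, F = 120.0, 1.0, 10.0, 50.0

n = 1
while True:
    p = a / (2 * b * n) + c / 2    # each firm's markup price (from the FOC)
    q = a / n - b * p              # firm-level output at that price
    if (p - c) * q - F <= 0:       # entry stops once profit disappears
        n -= 1                     # back up to the last profitable count
        break
    n += 1

p = a / (2 * b * n) + c / 2
print(f"firms: {n}, price: {p:.2f}, marginal cost: {c:.2f}")
# firms: 4, price: 20.00, marginal cost: 10.00
```

Entry drives profit toward zero, yet the equilibrium price remains above marginal cost, which is precisely the feature that generated the welfare debate.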

Chamberlin’s analysis also contains a number of implicit assumptions that drive the analysis. First, product differentiation is assumed to be exogenous to the competitive process, an assumption more acceptable for the 1930s than the 2020s. Instead of modeling differentiation as a form of innovation driven by the profit opportunities in the market, Chamberlin assumed differentiation simply existed and, therefore, was not generated by innovation in style or quality that intentionally improves consumer welfare through the exercise of entrepreneurship. By imposing this assumption, Chamberlin’s model is able to offer initial insights into the effect of exogenous product differentiation on the competitive process. On the other hand, modern models that continue to assume differentiation is exogenous offer limited insight into the competitive process.

Second, the differentiation assumption is imposed with little consideration of the consumer informational problems likely to occur once products become too complex for the standard assumption of perfect information to be credible. Chamberlin’s analysis did allow for marketing costs associated with the firm creating new product features, but this analysis falls short of addressing complex information problems. Obviously, claiming the product offers certain consumer benefits does not mean the consumer believes the benefits exist, since the consumer needs some reason to believe the firm’s representations. Thus, it seems reasonable to assume the customer will behave strategically to obtain the required information at a low cost.

More importantly, posted price models may not even be credible once firms sell complex products that are differentiated with respect to a wide range of functionalities and services. Because sellers interact directly with buyers, it is possible, or even likely, that prices will be set through complex negotiations. Such negotiations may not be easy to represent as auctions, because each firm’s product is not necessarily completely defined prior to the negotiation process. Firms and customers could negotiate on characteristics of the differentiated product. For example, the customer may want to define a specific delivery schedule, agree on a customization of the physical product design, obtain a commitment for rush orders when necessary, mandate rapid follow-up on technical glitches, or guarantee some type of supply when shortages exist. On the other hand, the producer may negotiate for substantial volume commitments, feedback on the product’s functionality when used in the customer’s manufacturing process, or specific acceptance of shipment dates. Price negotiations occur in light of these complexities, and each firm is likely to customize the transaction to the specific customer. Even if firms post price lists, those numbers may have little economic meaning without an understanding of the relationships between buyers and sellers.

C. Lerner’s Monopoly Model

Lerner’s paper searched for a monopoly index.[9] He started with a representative firm in a perfectly competitive market, assumed a monopolist “rolled up” all the firms in that market, and then asked how the monopolist would behave. After a lengthy discussion, Lerner’s “thought experiment” arrived at his classic definition of monopoly power as the ratio of price minus marginal cost to price. Because marginal cost equals marginal revenue at the optimum, Lerner could replace marginal cost with marginal revenue and observe that his monopoly formula equaled the inverse of the firm’s elasticity of demand. In effect, Lerner’s model of the monopolist’s optimal pricing decision defined the polar opposite of perfect competition.[10]
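
In modern notation, the derivation is short (a standard textbook reconstruction rather than Lerner’s original presentation):

```latex
% Monopolist maximizes \pi = p(q)\,q - C(q); the first-order condition
% is MR = MC, with MR = p(1 - 1/\varepsilon) for (absolute) demand
% elasticity \varepsilon. Substituting MR for MC in the index gives:
MR = p\left(1 - \frac{1}{\varepsilon}\right) = MC
\quad\Longrightarrow\quad
L = \frac{p - MC}{p} = \frac{p - MR}{p} = \frac{1}{\varepsilon}.
```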

Implicit assumptions are also important in Lerner’s paper. First, Lerner’s monopoly model is inherently static. Demand and cost conditions can change, but the monopoly control over price remains. This would be a reasonable assumption for a model of a socialist takeover of an industry, but it is quite restrictive for a more general competitive process. For example, Lerner’s modeling construct imposes the assumption that only one firm exists, thus no potential for competitive pricing exists within the market. Moreover, the Lerner model focuses only on marginal costs, a restrictive assumption that can lead to policy errors in more complex competitive markets.[11] Static modeling and the focus on marginal costs seem reasonable for Lerner’s goal of a benchmark model of monopoly, but the model’s practical uses seem limited.

Second, by design, Lerner’s model does not explicitly address the potential for exogenous considerations to affect the monopolist’s pricing. Lerner recognized the restrictive nature of his result and cautioned that “it would be best to consider this [the Lerner equation] as a special case”; his equation offers only limited insight into the competitive process.[12] Lerner listed three examples of situations in which his equation need not define the market price (non-economic objectives of the firm, responses to political pressure, and entry deterrence). Numerous other examples of the equation failing to predict price can easily be posited based on various customer counter-strategies to mitigate monopoly power over time. Simply put, Lerner recognized that actual economic decisions are often more complicated than the static math would suggest.

 

III. IMPLICATIONS OF THE ASSUMPTIONS FOR STATIC PRICE MODELS

Insights from Bertrand, Chamberlin, and Lerner set the foundation on which the game theoretic price models for differentiated products are based. However, when reaching back in time to make use of those classic models, it is also important to address the clear limitations of the analyses and integrate the limitations into any policy predictions generated by the models. Although theorists often attempt to justify their analyses after a price simulation is complete, such an approach is backwards. A careful industry analysis is necessary prior to the application of the price model, and such an analysis is likely to identify issues that significantly complicate the modeling process. In merger review, price (simulation) modeling should be an afterthought, useful only in special case situations. Reviewing the classic theories identifies two important insights relevant to a competitive analysis.

First, the three classic models are all relatively static in nature, with only Chamberlin offering an endogenous entry process similar to that used in perfect competition. Both the Bertrand and Lerner models incorporate a competitive adjustment process within their core analysis, thereby abstracting from the entry issue. Bertrand’s model expects constant cost firms to expand to clear the market, while Lerner’s model adjusts the monopoly price to clear the market. In effect, none of the three models offers real dynamic insight into the competitive process.[13] One would think that this is a show-stopper for a modern product differentiation model, because differentiated products generally involve some form of innovation to better meet consumer demand.

Simply claiming that merger policy focuses only on short-run (static) price increases seems problematic when the model’s goal is to explain the pricing of differentiated products. Technically, the Lerner index remains an equilibrium condition in a dynamic pricing model (one that sets the optimal price for each time period in light of the effects of that price on future prices), but the Lerner equation could contain any number of shadow prices that exist in the dynamic equilibrium, and these considerations are not included in standard static implementations.[14] Thus, although a general Lerner analysis exists in theory, practical insights are available only in the simplest special case situations, in which the dynamic and static equations are basically the same.[15]
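
In stylized form, the dynamic first-order condition simply appends a shadow-price term to the static equation. The term \mu_t below is a generic placeholder (it could reflect goodwill, learning-by-doing, or entry deterrence, and may be positive or negative), not a construct taken from any specific model:

```latex
% Dynamic analogue of the Lerner condition in period t:
\frac{p_t - MC_t - \mu_t}{p_t} = \frac{1}{\varepsilon_t},
\qquad \mu_t := \text{shadow value of the current decision on future profits.}
```

Only when \mu_t = 0 does the condition collapse to the static Lerner index, the simplest special case referenced above.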

Second, all three models make clear assumptions on how firms and customers interact in the economy. Firms post prices and customers decide to buy or not to buy. In technical terms, firms are strategic players when they choose price, but customers remain passive. However, this is simply a modeling assumption and must be validated by evidence for the price model to be useful. Once price negotiation is allowed, it seems necessary to model the possible interactions between the buyers and sellers. Commitment contracts offered by buyers could preclude marginal pricing, as the customer offers to buy substantial product at a set price. Overall, negotiations create the potential for a wide range of competitive outcomes, with pricing just one of many considerations that must be addressed.[16]
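
As one illustration of why posted-price logic can break down, a simple Nash bargaining split ties the negotiated price to each side’s outside option rather than to a marginal-cost markup formula. The values below are purely hypothetical:

```python
# Illustrative Nash bargaining over a negotiated price. The buyer
# values the product at v, the seller's cost is c, and each side holds
# an outside option. All values are hypothetical.
v, c = 100.0, 40.0                     # buyer value, seller cost
buyer_outside, seller_outside = 15.0, 10.0
beta = 0.5                             # seller's bargaining weight

# Gains from trade net of both parties' outside options.
surplus = (v - c) - buyer_outside - seller_outside

# Nash split: each side receives its outside option plus a share.
price = c + seller_outside + beta * surplus
print(f"negotiated price: {price:.2f}")          # 67.50

# A change that merely degrades the buyer's outside option raises the
# negotiated price with no change in marginal cost:
surplus_post = (v - c) - 5.0 - seller_outside
print(f"price, weaker outside option: "
      f"{c + seller_outside + beta * surplus_post:.2f}")   # 72.50
```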

A more complicated model would consider the potential for buyers to endogenously react to a material change in market structure. Here, even if the analyst had a mathematical representation of the pre-merger competitive process, the strategies employed by buyers could change after the merger is consummated. Again, marginal analysis could be of little use. Without the price-setting assumption, even a dynamic version of Lerner-based analysis is problematic.

Because the theory suggests that price modeling often fails to add insight into the competitive analysis, it appears reasonable to remove or marginalize references to this modeling in the Merger Guidelines. Unilateral effects analyses remain important, but they should be based on qualitative factual evidence, as in the reviews undertaken in the years between the release of the 1992 and 2010 Merger Guidelines. The next section explores whether this change is likely to limit agency success in litigation.

 

IV. PRICE MODELING IN LITIGATED MERGER CASES

Mathematical price modeling has achieved mixed success in the courts. In Oracle, the court offered a primer on unilateral effects analysis, identifying the key factors in the model. In particular, the plaintiff must show that the products are differentiated, that the merging firms’ products are close substitutes, that the merged firm can impose a material price increase, and that repositioning in response to the price increase is unlikely.[17] The plaintiff’s price model did not meet all these requirements and was rejected. A few years later, the court in CCC Holdings evaluated a price model and, again, rejected the analysis because the model “cannot be reasonably confirmed by evidence in the record.”[18]

During the government’s 16-case horizontal merger winning streak, from late 2011 through 2019, unilateral effects modeling had more success, with viable presentations in eight matters.[19] Usually, the analysis was considered as supplementing the merger review, but modeling played a more prominent role in Aetna (offering empirical support for the structural concerns) and Anthem (addressing the balance between anticompetitive effects and efficiencies).[20] More recently, price models have been less successful, with models compatible with the overall evidence in two matters, but rejected in three other cases.[21] Overall, it is hard to argue that price (simulation) modeling has played an important role in the litigation decisions, and thus, removing it from the Guidelines is unlikely to affect the Agencies’ success. In effect, because the courts evaluate the totality of the evidence available in the record, it is the case law – not the modeling or the Merger Guidelines – that controls the merger review process.

 

V. CONCLUSION

Summing up, game theoretic price models (i.e. those associated with the “central planning standard”) may fail to represent competitive realities and therefore generate poor predictions of merger-related price effects. Thus, removal or marginalization of price modeling from the 2022 Merger Guidelines is clearly warranted. Of course, analysts can still use the models in their work, but they should show, at a minimum, that the marketplace is relatively stable and that the assumption of posted prices is credible. These requirements seem most likely to be met in moribund legacy markets involving sales to consumers through distribution chains that do little more than mark up prices. Possibly the cigarette industry is an example of such a dying industry. Broadly defined, economic considerations remain the controlling intellectual authority in merger review. Market definition defines the playing field, and structural considerations, ease of entry, insights on competitive effects, and efficiencies, taken together, determine whether the merger is likely to substantially lessen competition.


[1] Malcolm B. Coate is an economist in the Washington D.C. area, formerly employed by the Federal Trade Commission.

[2] Jonathan Kanter, “Remarks at the New York City Bar Association’s Milton Handler Lecture,” May 18, 2022. In a later speech, Kanter referred to price modeling as a “sometimes artificial exercise.” Jonathan Kanter, “Keynote Speech at Georgetown Antitrust Law Symposium,” September 13, 2022.

[3] Corporate power can be addressed under the consumer welfare standard by evaluating the welfare losses associated with problematic conduct. Although such behavior would lower corporate profits, in theory, a monopolist has no need to optimize. Harvey Leibenstein, Allocative Efficiency vs. X-Efficiency, 56 Am. Econ. Rev. 392 (1966). To take a very recent potential example of corporate power, big technology companies may have censored speech on medical issues at the behest of government officials. This could be considered a cartel orchestrated by bureaucrat ringmasters. See, https://nclalegal.org/2022/03/ncla-takes-on-u-s-surgeon-generals-censoring-of-alleged-covid-19-misinformation-on-twitter/. As for labor market issues, monopsony is an antitrust concern, and affected workers could be protected given the required facts.

[4] Franklin M. Fisher, Games Economists Play: A Noncooperative View, 20 RAND J. Econ. 113 (1989); Sam Peltzman, The Handbook of Industrial Organization: A Review Article, 99 J. Pol. Econ. 201 (1991).

[5] Coate and Fischer study a sample of 92 differentiated product markets and identify 41 markets in which the competitive process was clearly dynamic and seemed driven by non-price considerations. In 51 markets, price was the key aspect of competition. However, sufficient price data was available in only 12 of those mergers. Even here, the firm had to be assumed to dictate price for a price model to be appropriate. Malcolm B. Coate & Jeffrey H. Fischer, Is Market Definition Still Needed After All These Years, 2 J. Antitrust Enforcement 422 (2014).

[6] Joseph Bertrand, Book Review of Théorie Mathématique de la Richesse Sociale and of Recherches sur les Principes Mathématiques de la Théorie des Richesses, 67 J. des Savants 499 (1883). https://dl.dropboxusercontent.com/u/9050876/Bertrand1883.pdf.

[7] Edward Chamberlin, The Theory of Monopolistic Competition (Harvard, 1933).

[8] This social loss conclusion was (obviously) controversial and was developed more in the commentary than in the original work. Even minimal consideration of the problem identifies an oversight in the informal welfare discussion: the welfare gains from differentiation must be balanced against the welfare losses associated with pricing above cost, leaving social efficiency as an empirical question.

[9] Abba Lerner, The Concept of Monopoly and the Measurement of Monopoly Power, 1 Rev. Econ. Stud. 157 (1934).

[10] For an interesting commentary on the application of the index, see Kenneth G. Elzinga & David E. Mills, The Lerner Index of Monopoly Power: Origins and Uses, 101 Am. Econ. Rev. 558 (2011).

[11] For a model-agnostic approach to balancing estimated price effects and efficiencies over time, see Joseph J. Simons & Malcolm B. Coate, “A Net Present Value Approach to Merger Analysis” (2022), available at SSRN: http://ssrn.com/abstract=4104499.

[12] Lerner, supra note 9 at 170.

[13] Generalizing a Bertrand model for product differentiation creates the potential for entry to reestablish a zero-profit equilibrium, but such an analysis is complicated by the exogenous nature of the differentiation assumption. If one allows the product differentiation to be endogenous, the static modeling structure is problematic.

[14] Optimization models often include constraints on the choice variables that limit the values those variables can take. Shadow prices enforce these limits in the first-order (optimization) condition.

[15] Even here, an assumption is necessary to determine the time period over which marginal cost is measured. The longer the period, the more costs become marginal.

[16] Once competition exists in multiple dimensions (e.g. price, product support, delivery services, and supply assurance) any given product may face close competition from different rivals with respect to different product characteristics. Thus, a firm may have a number of close competitors and diversion ratios may have little relevance to the analysis.  

[17] U.S. v. Oracle, 331 F. Supp. 2d 1098, 1113-1121 (N.D. Cal. 2004).

[18] FTC v. CCC Holdings, 605 F. Supp. 2d 26, 72 (D.D.C. 2009).

[19] For citations to the cases, see Malcolm B. Coate, “Innovations in the 2010 Merger Guidelines: Theoretical Foundations, Legal Overview, and Impact of the Changes on Merger Analyses” (2021), available at SSRN: http://ssrn.com/abstract=3988160.

[20] Judge, now Justice, Kavanaugh dissented in the Anthem appeal. He observed that the relevant competitive process involves insurers negotiating with providers, and that the demonstrated savings from the discounts the post-merger firm would obtain from providers imply the merger is procompetitive. This suggests that Justice Kavanaugh would have accepted an efficiency defense. For a discussion of efficiency presentations, see Malcolm B. Coate & Arthur Del Buono, “A Commentary on Presenting Efficiencies in a Horizontal Merger Review” (2022), available at SSRN: http://ssrn.com/abstract=4127486.

[21] In the T-Mobile merger, the court concluded its decision with an extensive discussion of competition in dynamic markets; the static pricing model presented in the case ignored all these considerations and therefore offered no value to the analysis. State of New York v. Deutsche Telekom, No. 1:19-cv-05434 (S.D.N.Y. February 11, 2020) at 144-151.