Conjoint Analysis: No Silver Bullet for Calculating Class-Wide Damages
Over the last few years, “conjoint analysis” has become the methodology du jour for false advertising plaintiffs seeking to demonstrate they can calculate class-wide damages. Conjoint analysis is so named because it is used to study the joint effects of multiple product attributes on consumers’ choices. At bottom, conjoint analysis uses survey data to measure the strength of consumers’ preferences for particular product features. Or, put differently, it tries to isolate how much people care about an individual product attribute in a multi-feature product (in a more scientific manner than just asking them directly).
The technique was not designed for quantifying damages in litigation. It developed in the field of market research as a tool to help businesses optimize their products. For example, assume that an auto manufacturer is trying to design a maximally profitable car. It needs to add enough features to the car that consumers will find it attractive, while keeping it cheap enough that consumers can afford it. What is the optimal balance of features? The manufacturer might conduct a conjoint analysis to help guide its selection. Thus (to give a simplistic example), the manufacturer might design a conjoint survey to gauge the relative impact on consumer preferences of (a) average miles per gallon, (b) color, (c) seat covering, and (d) price.
There are multiple “types” of conjoint analysis—ranging from “full-profile analysis” (where survey respondents rank product profiles from most to least preferred) to “adaptive conjoint analysis” (where the survey is customized in real time for each respondent, based on her answers). But the most popular form in class action litigation is “choice-based conjoint analysis.” For this type of survey, participants are given verbal descriptions of the relevant product attributes (e.g., seat covering) and the different variations available for each (e.g., leather, pleather, and fabric). They then undertake a series of “choice exercises” in which several hypothetical products are described to them, each with a different combination of features, and the participant must pick the product he or she prefers most.
So, using our car example: in the first round, survey takers might be asked to select from among three cars (labeled Option 1, Option 2, and Option 3), each described by its gas mileage, color, seat covering, and price.
In the next round, participants might be asked to select from among a different set of three options, with the attribute levels combined in new ways.
This would continue for many rounds. In theory, survey respondents will be forced to make tradeoffs between different product attributes, approximating the way consumers make purchase decisions in real life. Using the second choice exercise above as an example: I might prefer leather seats and good gas mileage, but the survey will implicitly force me to decide whether those preferences are worth, say, an $8,000 price difference between Option 1 and Option 3.
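To make the mechanics concrete, here is a minimal Python sketch of how such choice exercises might be assembled. Every detail is hypothetical: the attribute levels, the prices, and the number of rounds are invented for illustration, and a real conjoint survey would use a carefully balanced experimental design rather than the purely random draws shown here.

```python
import random

# Hypothetical attributes and levels for the car example (illustration only).
ATTRIBUTES = {
    "mpg": [25, 30, 40],
    "color": ["black", "white", "lime green"],
    "seats": ["leather", "pleather", "fabric"],
    "price": [22000, 26000, 30000],
}

def random_profile():
    """Build one hypothetical car profile by picking a level of each attribute."""
    return {attr: random.choice(levels) for attr, levels in ATTRIBUTES.items()}

def choice_set(n_options=3):
    """One 'choice exercise': several profiles from which a respondent picks one."""
    return [random_profile() for _ in range(n_options)]

# A survey is simply a sequence of such exercises.
survey = [choice_set() for _ in range(12)]
for i, option in enumerate(survey[0], start=1):
    print(f"Option {i}: {option}")
```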
After a respondent completes all the choice exercises, a computer crunches the data and calculates his or her so-called “part worths” for each attribute value (e.g., “leather seats” or “green color”)—in other words, it calculates the amount that attribute value contributes to his or her overall “willingness to pay” (WTP) for cars. For example, other things equal, my survey responses might reveal that I am willing to pay $2,000 more for a car that gets 40 mpg compared to one that gets only 25 mpg. In turn, market researchers can combine the results of many survey respondents and attempt to estimate these “part worths” for the population as a whole.
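As a rough illustration of that last step (using invented part-worth values rather than real survey output), one common way to translate part-worths into dollars is to divide the utility difference between two attribute levels by the magnitude of the price coefficient:

```python
# Hypothetical part-worths for one respondent, in utility units, measured
# relative to a baseline level of each attribute. Invented for illustration.
part_worths = {
    ("mpg", 25): 0.0,
    ("mpg", 40): 1.6,
    ("seats", "fabric"): 0.0,
    ("seats", "leather"): 0.9,
}

# Hypothetical price coefficient: utility change per additional dollar of price.
price_coef = -0.0008

def wtp(attr, level_from, level_to):
    """Approximate dollar value of moving attr from level_from to level_to."""
    delta_utility = part_worths[(attr, level_to)] - part_worths[(attr, level_from)]
    return delta_utility / abs(price_coef)

print(wtp("mpg", 25, 40))   # 2000.0, i.e., roughly $2,000 for 40 mpg vs. 25 mpg
```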
Enter false advertising class actions. In Comcast Corp. v. Behrend, the Supreme Court required plaintiffs seeking class certification to identify a method for calculating damages on a class-wide basis. To meet that requirement, false advertising plaintiffs have increasingly turned to conjoint analysis. Returning to our car example, if a manufacturer advertised its car as averaging 40 mpg, but in reality it only averaged 30 mpg, plaintiffs might use conjoint analysis to determine how much more consumers as a whole are “willing to pay” for an extra 10 miles per gallon. Multiply that WTP differential by the number of cars sold during the class period, and—presto—there are the class-wide damages!
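In sketch form, and with every number invented for illustration, that proposed damages model reduces to a single subtraction and a single multiplication:

```python
# All figures below are hypothetical and serve only to show the arithmetic.
wtp_as_advertised = 30_000.0   # average WTP for the car advertised at 40 mpg
wtp_as_delivered = 28_600.0    # average WTP for the car that really gets 30 mpg
units_sold = 250_000           # cars sold during the class period

price_premium = wtp_as_advertised - wtp_as_delivered   # $1,400 per car
class_damages = price_premium * units_sold             # $350,000,000

print(f"Claimed class-wide damages: ${class_damages:,.0f}")
```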
If you think that seems a bit too simple, you’re right. There are a number of serious problems with conjoint analysis when used as a damage model in false advertising cases. For brevity’s sake, this post will discuss just four of them.
First, even if conjoint “choice exercises” are more realistic than asking consumers point-blank how much they would be willing to pay for a specific feature in isolation, they still do not resemble how consumers shop in real life. Choice-based conjoint surveys explicitly call out several product attributes, force survey participants to think about them, and give them all equal prominence in the choice exercises. But especially for low-cost goods like foods and beverages, consumers do not shop by comparing “lists of attributes” where each feature is given equal prominence. They make relatively quick decisions based, in many cases, on the overall look or feel of the product or its positioning in the store. Courts have time and again reminded parties that consumer surveys must accurately simulate real-world conditions in order to generate reliable results, and most conjoint surveys do not.
Second, the results of conjoint analyses are subjective. In a false advertising case, damages are supposed to reflect the amount by which the actual market price of the product was “inflated” as a result of the false advertisement. But what conjoint analysis measures—willingness to pay, or WTP—corresponds to consumers’ subjective perception of “worth.” Even if a properly conducted conjoint analysis might tell us how much consumers subjectively prefer the product as advertised over the product as sold, it can’t tell us how much the challenged advertisement actually caused the market price to move. As a quartet of experts in economics and marketing wrote, “it is important to remember that [subjective] consumer valuations of the misrepresented feature”—what conjoint analysis measures—“are not the same as the market price premium associated with the alleged misrepresentation.”
Back to the car example again: faced with two identical new cars, one lime green and another black, most consumers would be willing to pay less for the lime green car—probably by an amount equal to the cost of repainting it. A conjoint analysis, therefore, would find that black cars should sell at a “price premium” relative to lime green cars. But car companies price their new cars the same, regardless of paint color. Likewise, because consumers on the whole prefer their orange juice without pulp, a conjoint analysis would find a “price premium” associated with a “pulp-free” claim. But manufacturers charge the same for with-pulp and pulp-free versions of their juice products. In these cases, conjoint analysis would find a “price premium” that isn’t actually there.
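A quick numerical sketch of the color example (all figures hypothetical) shows how a survey-derived premium can coexist with identical sticker prices:

```python
# Hypothetical survey-derived average WTP for two otherwise identical cars.
avg_wtp_black = 27_000.0
avg_wtp_lime_green = 26_200.0   # lower by roughly the cost of a repaint

conjoint_premium = avg_wtp_black - avg_wtp_lime_green   # $800 "premium"

# Hypothetical sticker prices set by the manufacturer, which ignore paint color.
price_black = 25_995.0
price_lime_green = 25_995.0
market_premium = price_black - price_lime_green         # $0

print(f"Conjoint-implied premium: ${conjoint_premium:,.0f}")
print(f"Observed market premium:  ${market_premium:,.0f}")
```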
As one of the leading authorities on conjoint methods, Bryan Orme, explains: “many factors influence market[s] … in the real world that cannot be captured through conjoint analysis.” Some courts have recognized this too. These factors include manufacturers’ cost of goods, competitors’ pricing behavior, the manufacturer’s own product-positioning strategy, seasonality, stocking behavior, and certain sticky price “thresholds” that manufacturers don’t want to cross. Conjoint analysis does not consider any of these so-called “supply-side” factors; as a result, it can’t be used to calculate how much the actual market price of the product was “inflated” by false advertising. (Theoretically, conjoint analysis could be used as the first step in a multi-step model that addresses the supply side, but the supply-side analysis is vastly complex, requires copious data, and has questionable reliability. Most plaintiffs’ experts simply ignore it.)
Third, consumer preferences vary significantly from one purchaser to the next. For example, leather seats in lieu of fabric seats might make most consumers willing to pay modestly more for a car, but for vegetarians and animal-rights activists, that same feature might add nothing to their WTP, or might even reduce it. By the same token, the presence of “artificial flavors” may make many consumers less willing to pay for a food product, but it might make other consumers, who prize taste over “naturalness,” willing to pay more. When litigation experts use conjoint analysis to calculate a so-called “price premium,” they are essentially averaging out these subjective differences to determine the preferences of the “marginal” consumer. This average value is either too high or too low for almost every actual consumer—potentially by a wide margin.
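A short sketch with invented numbers shows how poorly such an average can describe the people being averaged:

```python
import statistics

# Hypothetical individual WTP values (in dollars) for leather seats vs. fabric:
# most respondents value the upgrade, some are indifferent, a few are put off.
individual_wtp = [900] * 70 + [0] * 20 + [-300] * 10   # 100 respondents

class_wide_premium = statistics.mean(individual_wtp)
print(f"Class-wide 'premium': ${class_wide_premium:,.0f}")   # $600

# How many respondents does that single number come close to describing?
close = sum(abs(w - class_wide_premium) <= 100 for w in individual_wtp)
print(f"Respondents within $100 of the average: {close} of {len(individual_wtp)}")
```

In this toy example, the single class-wide figure lands hundreds of dollars away from every individual respondent’s actual valuation.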
This matters when a plaintiff is trying to recover damages in a false advertising class action. Members of a class are all supposed to “have suffered the same injury,” but the individual-level data from conjoint studies routinely demonstrate that large swaths of consumers were quite happy with the product they purchased, notwithstanding the alleged false advertising. Indeed, the data may even show that a portion of the class prefers the product as it was actually sold (e.g., containing tasty artificial flavors) to the product as it was advertised (e.g., “natural” but bland). Consumers in this portion of the class are not “injured” in any concrete way, but a damages model based on conjoint analysis treats them as if they were. This sort of indiscriminate averaging also violates the Supreme Court’s admonition that “statistical evidence” may only be used in a class action if “each class member could have relied on” that same evidence “if he or she had brought an individual action.”
Fourth and finally, in part because of problems #1, #2, and #3 above, the results of conjoint analyses can be unbelievable on their face. As conjoint expert Bryan Orme put it, “[e]ven when computed reasonably, the results often seem to defy commonly held beliefs about prices….” For example, in one recent case, the plaintiffs attacked four allegedly misleading statements on the label of an energy drink: (1) “Hydrates Like a Sports Drink”; (2) “Re-hydrate”; (3) “Consume Responsibly”; and (4) “an ideal combo of the right ingredients in the right proportion.” To calculate class-wide damages, the plaintiffs submitted a conjoint analysis purporting to find that, in consumers’ eyes, these four anodyne statements—standing alone—accounted for 81% of consumers’ willingness to pay for the drink. If true, this would mean that consumers “valued” these four label statements more than four times as much as the energy drink’s actual flavor, ingredients, branding, and energizing effect combined (81% versus the 19% attributable to everything else). The court dismissed this amazing finding as “incongruous,” especially because fewer than 10% of survey respondents even mentioned these claims when directly asked what was “important to” their purchasing decision. When a technique is capable of generating results this facially outlandish, one has to wonder how accurate a job it is doing when it generates results that are facially plausible.
* * *
When applied in market research—the use for which it was designed—conjoint analysis can be a worthwhile tool despite these problems. That is because market researchers generally use it to obtain directional information (e.g., “If we increase gas mileage by 10 mpg, but this requires a $10,000 price increase, then sales will likely go down”). For that purpose, a well-done conjoint study might do the trick. But market researchers worth their salt generally would not use conjoint analysis to obtain reliable quantitative information (e.g., “If we increase gas mileage by 10 mpg, and price goes up by $10,000, then sales will drop by 75,000 units”). Such a calculation could be performed—but for the reasons above, the resulting number would be largely meaningless. To quote Orme again:
Conjoint analysis can reveal product modifications that can increase market share, but it will probably not reveal how much actual market share will increase. Conjoint analysis can tell us that the market is more price sensitive for Brand A than Brand B, but we probably do not know the exact price sensitivity of either one. Conjoint analysis can identify which market segment will be most likely to purchase your client’s product, but probably not the exact number of units that will be purchased….
Do not let th[e] enthusiasm get out of hand…. While conjoint [models] are excellent tools for revealing strategic moves that can improve the success of a product, … we must avoid thinking that adjusted conjoint models can consistently and accurately predict volumetric [i.e., quantitative] absolutes….
And there’s the rub. When it comes to damage models in litigation, reliable quantification is the whole point. Plaintiffs’ experts are not using conjoint analysis to predict which “strategic moves [will] improve the success of a product”; they are purporting to use it to pinpoint how much class members overpaid for a product in real dollars. That is not what conjoint analysis was intended to do—and thus, it is hardly surprising that conjoint analysis is not up to the task.
Despite these considerable problems, conjoint analysis in false advertising class actions won’t be going away anytime soon. While some courts have recognized its shortcomings, others have summarily approved its use because it looks scientific, or because it has a long and relatively uncontroversial history of use in the very different field of market research, or because other courts have also summarily approved it. Defendants faced with conjoint damages models in false advertising litigation should present these fundamental flaws to the court, and should urge the court to subject the method to the close, independent scrutiny that Daubert demands.