I would like to thank Larry Gibson for his provocative article, “What’s Wrong With Conjoint Analysis?” Larry makes a valid point, namely, that self-explicated scaling is largely overlooked even though it appears to have valuable potential as a research tool. However, Larry, in his zealous evangelizing of the merits of self-explicated scaling, has frequently and incorrectly criticized conjoint analysis. Further, Larry completely ignores the idea that self-explicated scaling might also have methodological limitations.
Fiedler reported, as have several others, that ACA, in some circumstances, underestimates price sensitivity. ACA, the first commercially available conjoint software package, and, indeed, the primary reason for the widespread use of conjoint analysis today, relies heavily on self-explicated scales. To my knowledge, price sensitivity underestimation has never been shown to systematically exist for any conjoint analysis technique other than ACA.
It is well known in the conjoint community that full-profile designs are limited to a fairly small number of attributes, ideally six, although practitioners regularly exceed that number. I agree with Larry that this is a limitation. Certainly, there are situations where one would like to accommodate a larger number of attributes in the design. Larry claims that there are only two ways to address this problem: hybrid models that employ self-explicated scales and “more complex designs and highly sophisticated mathematical procedures.”
He goes on to claim that if one uses hybrid models, “the rationale for using the trade-off process in the first place is unclear.” It is not unclear if you recognize the limitations of self-explicated scales. Hybrid models attempt to marry the best of both techniques, while minimizing the weaknesses of each.
I can only assume that by his second alternative of “complex designs” he refers to partial profile studies. If so, then once again he has overstated his case with the claim that “they make only a modest difference in the basic capacity problem.” Partial profile designs, championed by Keith Chrzan and others, have been successfully used with as many as 50 attributes.
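To make concrete how partial-profile designs sidestep the capacity problem, here is a minimal sketch. It assumes nothing about any particular software; the attribute names, counts, and task sizes are purely illustrative. Each choice task shows respondents only a small random subset of the full attribute list, so even a 50-attribute study stays manageable for the respondent.

```python
import random

# Hypothetical study with 50 attributes of 3 levels each
# (names and counts are illustrative only).
attributes = {f"attr_{i}": [f"level_{j}" for j in range(1, 4)]
              for i in range(1, 51)}

def partial_profile_task(attrs, shown=5, alternatives=3, rng=random):
    """Build one partial-profile choice task: pick a random subset of
    attributes, then give every alternative one level of each shown
    attribute. Respondents never see more than `shown` attributes at once."""
    subset = rng.sample(sorted(attrs), shown)
    return [{a: rng.choice(attrs[a]) for a in subset}
            for _ in range(alternatives)]

task = partial_profile_task(attributes)
```

Across many such tasks, every attribute appears often enough to estimate its part-worths, even though no single task ever shows more than a handful.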
Larry also charges that “far fewer attributes and levels can be studied in any conjoint analysis than clients would like or than logic would suggest.” It is simply not always the case that the client wants to examine more than six attributes. Both prior knowledge of the market and research objectives may dictate a small set of relevant attributes. I have had numerous studies where the relevant attribute set was easily accommodated in a full profile design.
Larry further claims that “lost opportunities are the inevitable result of conjoint’s incomplete maps and unrealistic model.” To say that lost opportunities are inevitable is to assume that every research issue is too large and too complex to be modeled using conjoint analysis. This is simply not so. The degree to which opportunities are missed is much more a function of the competence and diligence of the researcher than the method employed.
Finally, Larry asserts that potential product benefits, such as “whitens teeth,” can seldom be included in conjoint studies and can only be included in a self-explicated study. Much of my practice is new product work. I include potential product features and/or benefits in almost every study I do. There is no attribute that can be included in a self-explicated study that cannot also be included in a conjoint study.
The ultimate proof of the methodological pudding is in the method’s ability to accurately address the research objectives of that particular study. The most demanding research objective, in trade-off studies, is the prediction of sales in the marketplace. There are numerous papers touting the effectiveness of conjoint analysis in accurately modeling market behavior as represented by sales at retail, including “Forecasting Demand,” Marketing Research (Summer 1994) by Jonathan Weiner. There are many others.
Larry cites the paper by Green and Srinivasan (1990). It is an excellent survey and assessment of methods current in 1990. Larry apparently skipped the section where Green and Srinivasan discuss the limitations of self-explicated scales (I quote verbatim from the Green and Srinivasan article):
- “If there is substantial intercorrelation between attributes, it is difficult for the respondent to provide ratings for the levels of an attribute holding all else equal.”
- “Increased biases may result from direct questioning of the importance of socially sensitive factors.”
- “(Another) … problem is that in the self-explication approach one assumes the additive part-worth model to be the literal truth.”
- “(Another) … problem … is that any redundancy in the attributes can lead to double counting.”
- “(Another) … problem is that when the attribute is quantitative, the relative desirability ratings may become more linear.”
- “(Another) … problem occurs if the data collection is limited solely to the self-explication approach. … This limitation can be serious in new product contexts in which the researcher uses a simulator to obtain average purchase likelihoods under alternative product formulations.”
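For readers unfamiliar with the additive part-worth model that Green and Srinivasan take issue with, a minimal sketch may help. In the self-explicated approach, a product’s utility is simply the importance-weighted sum of the stated desirability of each of its attribute levels; all numbers below are made up for illustration.

```python
# Hypothetical self-explicated scores for a toothpaste study:
# stated importance per attribute and desirability per level (0-10 scales).
importance = {"price": 9, "brand": 5, "whitens_teeth": 7}
desirability = {
    "price":         {"$1.99": 10, "$2.99": 6, "$3.99": 2},
    "brand":         {"Brand A": 8, "Brand B": 4},
    "whitens_teeth": {"yes": 10, "no": 0},
}

def self_explicated_utility(profile):
    """Additive part-worth model: total utility is the sum, over
    attributes, of importance times the desirability of the level shown."""
    return sum(importance[a] * desirability[a][lvl]
               for a, lvl in profile.items())

u = self_explicated_utility({"price": "$1.99",
                             "brand": "Brand A",
                             "whitens_teeth": "yes"})
# 9*10 + 5*8 + 7*10 = 200
```

Note how the model assumes the attributes contribute independently and additively; the intercorrelation, double-counting, and literal-additivity problems quoted above are all consequences of that assumption.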
Self-explicated scaling has shown potential in the literature. Dr. Srinivasan is to be commended for his pioneering work in this field. I’m sure Eric Marder Associates has had excellent results with their SUMM approach. I agree completely with Larry that self-explicated scaling deserves a closer look. But all methods have limitations. It is precisely because self-explicated scaling has largely been ignored that we are uncertain of its limitations, as well as its potential uses. More research needs to be done to clarify when self-explicated scales are appropriate and when they are not. Thank you, Mr. Gibson, for a very stimulating, if not accurate, point of view.