When Rich Johnson, founder of Sawtooth Software, introduced Adaptive Conjoint Analysis (ACA) to the marketing research community in 1985, he almost single-handedly birthed an industry. Academics and practitioners alike embraced his innovative approach to conjoint because it was both elegant and practical. It was clever, intuitively reasonable and so easy to use even a PhD couldn’t mess it up. But I wasn’t sold on ACA then and I’m not sold on adaptive conjoint methods in general now.
ACA was the first adaptive technique but certainly not the last; Adaptive Choice-Based Conjoint (ACBC) and Adaptive Self-Explication (ASE) are two fairly recent entries. Adaptive approaches are generally built on the intuitively pleasing idea that the conjoint exercise can be made more efficient by eliminating, or at least de-emphasizing, attributes that are unimportant, and they generally identify the important and unimportant attributes by asking the respondent directly. On the surface, this makes perfect sense. Why waste time asking respondents to choose products that we already know they don’t like?
Well, for starters, people don’t always know what is important to them; they don’t always want to tell you even when they do know; and, most importantly, they’re consistently inconsistent. That is, what a respondent claims to be important before the conjoint (the adaptive part) and how he responds to real product choices in the conjoint may not be consistent with one another. And that inconsistency, which is so problematic for these adaptive approaches, is actually the key to conjoint analysis being successful.
Have you ever realized that choice-based conjoint studies, as well as max/diff studies, routinely perform mathematical miracles? The miracle is a statistical alchemy that turns the lead of choice data, i.e., binary data, into the gold of metric data. How can data with such low information content be converted into data with much higher information content? That’s like converting candle wax into jet fuel. Yay, the energy crisis is over! Well, not so fast.
In an article I wrote in 2010, “Is MaxDiff Really All That?”, I explored how a bunch of humble zeros and ones can legitimately be transformed into a continuous distribution of gloriously metric information. The key to this magical upgrade in data quality is, ironically, the clever application of respondent mistakes. Yes, it might surprise you to know, but respondents do make mistakes, at least if you give them a chance. And how they make mistakes, or more specifically, how often they make mistakes, is the special sauce that allows us to turn binary data into something much more useful.
Let me give you an example. Let’s say George is our respondent, and George is choosing his most preferred brand in a max/diff exercise. Let’s also say George loves Brand A a ton, but his second most preferred brand, Brand B, he’s just lukewarm towards. Now, Brand C, his third favorite brand, he likes almost as much as Brand B. So when George is facing multiple max/diff tasks that involve Brands A, B and C, he is much more likely to make a “mistake” and pick Brand C instead of Brand B, but he’s not nearly as likely to pick Brand B instead of Brand A. This different level of error when picking between Brands A and B vis-à-vis when picking between Brands B and C is what allows the mathemagician to estimate max/diff utilities that reflect the bigger gap in preference between Brands A and B.
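If you’d like to see the mechanics, here is a minimal back-of-the-envelope sketch in Python. It assumes a standard binary logit model and uses made-up utility numbers for George’s three brands (none of this is real data, and it’s not the hierarchical Bayes machinery the commercial packages actually use); the point is simply that the rate of “mistakes” in each pairing, once you invert the logit, hands back the size of the preference gap.

```python
# Toy illustration: how choice "mistakes" carry metric information under a
# standard binary logit model. The utilities below are made-up numbers for
# George, chosen so the A-B gap is wide and the B-C gap is narrow.
import math
import random

random.seed(42)

true_utils = {"A": 3.0, "B": 1.0, "C": 0.8}

def choose(brand_x, brand_y):
    """Simulate one logit choice between two brands."""
    p_x = 1.0 / (1.0 + math.exp(-(true_utils[brand_x] - true_utils[brand_y])))
    return brand_x if random.random() < p_x else brand_y

def mistake_rate(better, worse, n=20000):
    """Share of tasks where George 'mistakenly' picks the less preferred brand."""
    return sum(choose(better, worse) == worse for _ in range(n)) / n

for better, worse in [("A", "B"), ("B", "C")]:
    err = mistake_rate(better, worse)
    # Inverting the logit turns the observed error rate back into a utility gap.
    recovered_gap = math.log((1 - err) / err)
    print(f"{better} vs {worse}: mistake rate = {err:.3f}, "
          f"recovered utility gap = {recovered_gap:.2f}, "
          f"true gap = {true_utils[better] - true_utils[worse]:.2f}")
```

Run it and you’ll see George only rarely picks B over A but confuses B and C almost half the time, and those two error rates alone are enough to recover the wide A-B gap and the narrow B-C gap.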
So now back to our adaptive conjoint. By eliminating attributes that we know (or at least think we know; more on this in a minute) are unimportant, we are eliminating the opportunity for the respondent to make errors, a fundamental necessity for estimating utilities from choice data. Yes, we give the respondent more opportunities to make errors on the attributes we think are important, so we should improve our accuracy on those utilities, but we weaken our ability to estimate all the attributes accurately.
Even so, what makes us think we can reliably know which attributes are important to George? Humans are inconsistent, even accountants (and our George is head gardener at a psilocybin mushroom farm). But if you ask a human a question, he or she will give you an answer. That’s one of the scary things about surveys. So ask one which attributes are important and he or she will tell you. If you eliminate those attributes from further choice tasks, you may be missing something really important. Because, as we just learned, respondents make mistakes.
But respondents can process fairly complex relationships, too. An attribute that may not be important, or conversely may be very important, in isolation may be viewed differently in the context of a complete product profile. For example, I may think that I absolutely must have a 600 GB hard drive in any laptop in my consideration set. Then I’m confronted with a 256 GB hard drive that’s solid state and half the price. All of a sudden, that 600 GB drive doesn’t seem so “have to have.” These interactions between attributes are difficult, if not impossible, to anticipate.
One of the problems with adaptive methods is that, in the attempt to gain efficiency, we are forced to make the assumption that the respondent can and will tell us which attributes are important and that he would make product choices consistent with these claimed importances if shown all attributes. Attribute interaction effects and respondent inconsistencies make this assumption difficult to accept.
In a paper I published several years ago, I asked respondents to tell me which levels of an attribute they preferred most and which they preferred least. I just wanted to know which levels were the two extremes: most and least preferred. Afterwards, I ran a traditional conjoint exercise where I estimated utilities for all attribute levels. Any idea how often respondents’ claimed extreme attribute levels agreed with the conjoint utilities? About 60% of the time. That means 40% of the sample couldn’t (or wouldn’t) accurately identify their most and least preferred levels. I don’t have a lot of confidence in respondents telling me what is important. After all, isn’t that exactly the reason we do conjoint analysis in the first place? So we don’t have to rely on direct questions?
Recent papers seem to show that some adaptive methods do as well as traditional CBC (although not necessarily better). But the theoretical concerns raised here may point to future problems. ACA was enthusiastically received initially. It wasn’t until several years of extensive use that it was discovered that ACA potentially underestimates price sensitivity by a factor of 2 or 3.
If my concerns about the theoretical justification of adaptive methods are correct, I predict we will find similar difficulties with currently popular adaptive methods once they have been implemented in a wide variety of situations.