
The Logit-Cake Method©

A Proprietary Hybrid Choice-Based Approach to Trade-Off

The Logit-Cake Method© is a unique, proprietary approach to choice-based trade-off analysis, developed by MACRO Consulting, Inc., which offers several advantages over other trade-off methods:

  • A large number of product features (50 or more) can be included in the model
  • The heterogeneity problem long associated with aggregate logit models is avoided
  • The traditional advantages of logit models over conjoint models are maintained
  • First-order interactions can be estimated
  • There is complete control over the experimental design, in a full-profile format
  • Since product combinations are specified, via traditional experimental design, before the interview takes place, physical exhibits can be easily incorporated into the interview
  • Probability models are recalibrated against holdout choices to increase internal consistency and forecast accuracy

The approach involves a specific data collection procedure as well as a unique analytic protocol. The basic steps of the procedure are as follows:

Data Collection

  • The data collection procedure has three sections:
    • Product feature importance ratings
    • Trade-off exercise
    • Holdout card
  • In the product feature importance ratings section, respondents are asked to rate each feature in a list of product features on purchase interest. Several of the features included in the importance ratings are also included in the choice exercise.
  • The respondents then participate in a “full-profile” choice-based trade-off exercise. Respondents are typically shown a glossary of terms to review prior to both the importance ratings and the trade-off exercise to be certain they understand all of the attributes tested.
  • The full-profile products consist of six attributes, all of which are included in the importance ratings above. These products are either chosen from a set (classic choice) or rank-ordered (exploding data choice). A minimal sketch of how such full-profile tasks can be assembled follows this list.
  • Several holdout tasks, consisting of products similar to those in the choice exercise, are shown to respondents. They are asked, for each task, to choose one or none of the products shown, as the one they would buy if those shown were all that were available to them.
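To make the task construction concrete, here is a minimal Python sketch of assembling full-profile choice tasks from a set of attribute levels. The attribute names, levels, task counts, and the random draw used in place of a formal design are all hypothetical illustrations; an actual study would specify the combinations with a traditional (e.g., fractional-factorial or D-efficient) experimental design, as noted above.

```python
import itertools
import random

# Hypothetical six-attribute example; not the actual MACRO attribute list.
ATTRIBUTES = {
    "brand":    ["Brand A", "Brand B", "Brand C"],
    "price":    ["$99", "$149", "$199"],
    "warranty": ["1 year", "3 years"],
    "weight":   ["2 lb", "4 lb"],
    "battery":  ["8 hr", "12 hr"],
    "color":    ["Black", "Silver"],
}

def full_profiles():
    """Enumerate every full-profile product (the full factorial)."""
    names = list(ATTRIBUTES)
    for combo in itertools.product(*ATTRIBUTES.values()):
        yield dict(zip(names, combo))

def build_tasks(n_tasks=12, alternatives_per_task=3, seed=42):
    """Draw profiles into choice tasks.

    A real study would pull the combinations from a traditional
    experimental design; random sampling just keeps the sketch short."""
    rng = random.Random(seed)
    profiles = list(full_profiles())
    return [rng.sample(profiles, alternatives_per_task) for _ in range(n_tasks)]

if __name__ == "__main__":
    for i, task in enumerate(build_tasks(n_tasks=2), start=1):
        print(f"Task {i}:")
        for alternative in task:
            print("  ", alternative)
```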

Analysis

Define homogeneous respondent segments

  • Using respondent ratings from data step 1 as product feature partworths, the total sample can be segmented, via standard cluster analysis, into subgroups that are homogeneous with respect to the product features rated in data step 1 (see the sketch below).
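As one possible illustration of this step, the sketch below clusters a respondents-by-features matrix of importance ratings with k-means. The white paper specifies only “standard cluster analysis,” so the choice of k-means, the number of segments, and the simulated ratings are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def segment_respondents(ratings: np.ndarray, n_segments: int = 4, seed: int = 0):
    """Return a segment label for each respondent.

    ratings: array of shape (n_respondents, n_features) holding the
             data step 1 importance ratings."""
    scaled = StandardScaler().fit_transform(ratings)   # put features on a common scale
    model = KMeans(n_clusters=n_segments, n_init=10, random_state=seed)
    return model.fit_predict(scaled)

# Example with simulated ratings for 200 respondents and 50 features
rng = np.random.default_rng(1)
ratings = rng.integers(1, 11, size=(200, 50)).astype(float)
segments = segment_respondents(ratings, n_segments=4)
print(np.bincount(segments))    # respondents per segment
```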

Estimate utilities in choice exercise (data step 2)

  • Using any of a variety of available choice software packages, utility weights for each feature in the choice exercise (data step 2) can be estimated at the “semi-aggregate” level of the respondent segment (a bare-bones illustration follows).
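Since the paper leaves the estimation software open, the following sketch shows one way the semi-aggregate estimation could be done: a hand-rolled conditional logit fitted by maximum likelihood over all stacked choice tasks from one segment. The data shapes, the simulated tasks, and the use of scipy are assumptions of the example, not part of the method itself.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, X, chosen):
    """X: (n_tasks, n_alternatives, n_features) design array for one segment.
    chosen: (n_tasks,) index of the alternative picked in each task."""
    utilities = X @ beta                               # (n_tasks, n_alternatives)
    utilities -= utilities.max(axis=1, keepdims=True)  # numerical stability
    log_probs = utilities - np.log(np.exp(utilities).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(chosen)), chosen].sum()

def fit_segment_logit(X, chosen):
    """Maximum-likelihood utility weights for one respondent segment."""
    beta0 = np.zeros(X.shape[2])
    result = minimize(neg_log_likelihood, beta0, args=(X, chosen), method="BFGS")
    return result.x

# Simulated example: 300 stacked tasks, 3 alternatives, 8 coded features
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3, 8))
true_beta = rng.normal(size=8)
probs = np.exp(X @ true_beta)
probs /= probs.sum(axis=1, keepdims=True)
chosen = np.array([rng.choice(3, p=p) for p in probs])
print(np.round(fit_segment_logit(X, chosen), 2))
```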

Bridge utilities from data step 1 with data step 2

  • On a per-respondent-segment basis, a scalar is estimated from the features common to data step 1 and data step 2: the sum of the utility weights of the common features in data step 2 divided by the sum of the corresponding scores in data step 1 (see the sketch after this list). The formula for the scalar is as follows:

scalar = (X_21 + X_22 + X_23) / (X_11 + X_12 + X_13)
where X_ij = the utility weight of the jth common feature in data step i

  • The scalar reduces the feature scores in data step 1 to a scale comparable with the data step 2 utility weights.
  • On a per-respondent-segment basis, this scalar is multiplied by each score in data step 1 to produce utility weights comparable to the data step 2 utility weights.
  • Data step 1 and data step 2 utility weights are then merged to create one set of bridged utility weights (with the utility values from data step 2 used for the attributes common to both steps).
  • These bridged utility weights define the choice models from which all subsequent simulations will be based.
  • This bridging is done separately for each respondent segment.
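A small numerical sketch of the bridging arithmetic for a single respondent segment follows. The feature names and values are hypothetical; the point is only the order of operations: compute the scalar from the common features, rescale the data step 1 scores, then merge, letting the data step 2 weights stand for the common features.

```python
# Bridging sketch for one respondent segment. "step1" holds the rating-based
# scores and "step2" the choice-model utility weights, with three features
# common to both (all values are made up for illustration).
step1 = {"brand": 7.2, "price": 8.1, "warranty": 5.5,    # common features
         "color": 4.0, "battery": 6.3}                    # step-1-only features
step2 = {"brand": 1.10, "price": 1.65, "warranty": 0.80,  # common features
         "weight": 0.40}                                   # step-2-only feature

common = set(step1) & set(step2)

# Scalar = sum of step 2 weights over the common features divided by the
# sum of the step 1 scores over the same features.
scalar = sum(step2[f] for f in common) / sum(step1[f] for f in common)

# Rescale every step 1 score, then merge; step 2 values win for common features.
bridged = {f: scalar * v for f, v in step1.items()}
bridged.update(step2)

print(f"scalar = {scalar:.3f}")
print({f: round(u, 3) for f, u in bridged.items()})
```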

Calculate feature importance

  • Utility ranges for each feature can be calculated by subtracting the minimum utility value of a feature level from the maximum utility value.
  • Data step 1 feature ranges are scaled using the same form of scalar as the one used to bridge the utility weights:

scalar = (X_21 + X_22 + X_23) / (X_11 + X_12 + X_13)
where X_ij = the utility range of the jth common feature in data step i

  • Data step 1 scaled utility ranges and data step 2 ranges are combined to form one set of feature ranges (see the sketch below).
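The sketch below illustrates the range calculation and combination. The level utilities, the data step 1 ranges, and the scalar value are hypothetical placeholders; in practice the scalar comes from the formula above for the segment in question.

```python
# Level utilities (data step 2) and data-step-1-only ranges are placeholders.
step2_levels = {
    "brand":    {"Brand A": 0.60, "Brand B": 0.10, "Brand C": -0.70},
    "price":    {"$99": 0.90, "$149": 0.05, "$199": -0.95},
    "warranty": {"1 year": -0.40, "3 years": 0.40},
}
step1_ranges = {"color": 3.1, "battery": 5.2}   # rating-based ranges (step 1 only)

# Range of a feature = maximum level utility minus minimum level utility.
step2_ranges = {feature: max(levels.values()) - min(levels.values())
                for feature, levels in step2_levels.items()}

# Scale the data step 1 ranges with the segment's scalar (illustrative value
# here; in practice it comes from the formula above), then combine.
scalar = 0.17
combined_ranges = {feature: scalar * r for feature, r in step1_ranges.items()}
combined_ranges.update(step2_ranges)
print({feature: round(r, 2) for feature, r in combined_ranges.items()})
```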

Correct for excessive feature bias

  • When selecting products, respondents are commonly believed to be able to weigh no more than about six features at a time. The following step can be performed to remove some of the bias that including too many features introduces into the importance calculations.
  • For each respondent segment, the six features with the largest utility ranges are selected while the remaining features’ utility ranges are set to zero for that respondent segment.
  • Aggregate mean utility ranges for each feature are then calculated using the transformed utility ranges from the step above.
  • Mean ranges are standardized by summing across all ranges and dividing each range by that sum, so that each range is expressed as a percent of the sum of ranges (as illustrated below).
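A short sketch of the bias correction and standardization follows. The segment-level range values and feature names are made up for illustration; only the top-six truncation, the averaging across segments, and the percent-of-total scaling reflect the steps described above.

```python
import numpy as np

features = ["brand", "price", "warranty", "weight", "battery",
            "color", "size", "service"]
# rows = respondent segments, columns = feature utility ranges (hypothetical)
segment_ranges = np.array([
    [1.3, 1.8, 0.8, 0.4, 0.9, 0.5, 0.3, 0.2],
    [0.9, 2.1, 0.6, 0.7, 0.4, 1.1, 0.2, 0.1],
    [1.6, 1.2, 1.0, 0.3, 0.8, 0.6, 0.9, 0.4],
])

TOP_N = 6
trimmed = np.zeros_like(segment_ranges)
for s, row in enumerate(segment_ranges):
    keep = np.argsort(row)[-TOP_N:]      # indices of the six largest ranges
    trimmed[s, keep] = row[keep]         # all other ranges stay at zero

mean_ranges = trimmed.mean(axis=0)                    # aggregate mean ranges
importance = 100 * mean_ranges / mean_ranges.sum()    # percent of total
for feature, pct in zip(features, importance):
    print(f"{feature:8s} {pct:5.1f}%")
```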

Construct purchase probability model

  • For each respondent segment, the initial probability model is defined to be the standard choice model:

P_ij = e^(aX_ij + b) / Σ_k e^(aX_kj + b)
where X_ij is the total bridged utility of product i in holdout task j, a and b are scaling constants, and the sum runs over the products k shown in task j

  • The actual probability for each product i in each holdout task j is the number of times product i was chosen from task j divided by the number of respondents who were given task j (sample size of respondent segment k):

P'_ij = f_ij / n_k

  • The predicted probability, P_ij, is the initial probability defined above.
  • The holdout tasks can be used to further refine and calibrate the initial probability model (a numerical sketch of this calibration appears at the end of this section). If the actual probability, P'_ij, is regressed against the predicted probability, P_ij, the resulting model:

P'_ij = m·P_ij + c

allows us to adjust our initial model in the following way:

P''_ij = m·(e^(aX_ij + b) / Σ_k e^(aX_kj + b)) + c

  • Because our probabilities are bound by the constraint that:

Σ_i P_ij = 1

we need to rescale the modified probability model above into its final form:

P_ij = (m·(e^(aX_ij + b) / Σ_k e^(aX_kj + b)) + c) / Σ_k (m·(e^(aX_kj + b) / Σ_l e^(aX_lj + b)) + c)

Or more simply:

P_ij = P''_ij / Σ_k P''_kj
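The sketch below walks through this calibration numerically: logit shares are computed for a handful of holdout tasks, observed shares are simulated, m and c are estimated by ordinary least squares, and the adjusted shares are renormalized within each task. All data are simulated, and the specific regression routine (numpy's polyfit) is just one convenient choice.

```python
import numpy as np

def logit_shares(task_utilities):
    """Standard choice model within one task: e^(U_i) / sum_k e^(U_k)."""
    expu = np.exp(task_utilities - task_utilities.max())
    return expu / expu.sum()

rng = np.random.default_rng(3)
utilities = rng.normal(size=(5, 3))            # 5 holdout tasks, 3 products each
predicted = np.vstack([logit_shares(u) for u in utilities])

# Observed holdout shares P'_ij: choice counts f_ij divided by n_k = 120
observed = np.vstack([rng.multinomial(120, p) / 120 for p in predicted])

# Regress observed on predicted shares: P' = m * P + c
m, c = np.polyfit(predicted.ravel(), observed.ravel(), deg=1)

# Adjust the initial shares and renormalize within each task so they sum to one
adjusted = np.clip(m * predicted + c, 1e-9, None)   # guard against tiny negatives
calibrated = adjusted / adjusted.sum(axis=1, keepdims=True)
print("m =", round(m, 3), " c =", round(c, 3))
print(np.round(calibrated, 3))
```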

Note that, based on the purchase probability model, unit sales and gross revenue forecasts can be made for any definable product configuration. See the MACRO white paper Forecasting New Product Sales for more specific information.

Often product developers need to evaluate a large number of product features, measure some interaction terms (e.g., brand and price, or a multidimensional pricing structure), and express the product concepts in a realistic, full-profile format. The Logit-Cake Method© offers a unique, non-linear, cost- and time-efficient solution to those requirements.