27 September 2007

Uplift Modelling FAQ

[This is a post that will be updated periodically as more FAQs are added, so if you subscribe to the feed, it may keep re-appearing.]

  1. Q. What is Uplift Modelling?
    A. Uplift modelling is a way of predicting the difference that an action makes to the behaviour of someone. Typically, it is used to predict the change in purchase probability, attrition probability, spend level or risk that results from a marketing action such as sending a piece of mail, making a call to someone, or changing some aspect of the service that the customer receives.
  2. Q. Uplift Modelling sounds like Response Modelling. How is it different?
    A. Ordinary "response" modelling actually doesn't model a change in behaviour (even though it sounds as if it should): it models the behaviour of someone who is subject to some influence. Uplift models instead model the change in behaviour that results when someone is subject to an influence—typically, how much more that person spends, how much less likely (s)he is to leave etc.
    Mathematically, a response model predicts something like

    P (purchase | treatment)

    ("the probability of purchase given some specific treatment", such as a mailing), whereas an uplift model predicts

    P (purchase | treatment) – P (purchase | no treatment)

    ("the difference between the probability of purchase given some specific treatment and the corresponding probability if the customer is not subject to that treatment").
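The two formulas above also suggest one simple (though by no means the only) way to build an uplift model, sometimes called the "two-model" approach: fit one conventional model on the treated customers, another on the controls, and subtract the predicted probabilities. The following is a minimal sketch of that idea, with entirely synthetic data and scikit-learn assumed—not a method prescribed by this post:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 3))              # customer features (synthetic)
treated = rng.integers(0, 2, size=n)     # 1 = received the mailing, 0 = control

# Synthetic outcomes: the mailing only helps customers with feature 0 > 0
base = 0.15 / (1 + np.exp(-X[:, 1]))     # background purchase probability
lift = 0.10 * (X[:, 0] > 0) * treated
purchased = (rng.random(n) < base + lift).astype(int)

# Fit separate models for the treated and control groups
m_treat = LogisticRegression().fit(X[treated == 1], purchased[treated == 1])
m_ctrl = LogisticRegression().fit(X[treated == 0], purchased[treated == 0])

# Predicted uplift = P(purchase | treatment) - P(purchase | no treatment)
uplift = m_treat.predict_proba(X)[:, 1] - m_ctrl.predict_proba(X)[:, 1]
```

Subtracting two separately-fitted models is the easiest approach to explain, though (as the FAQ notes below) differencing two noisy estimates is exactly why uplift predictions carry larger error bars than conventional ones.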
  3. Q. Uplift modelling sounds like Voodoo. How can it possibly know the change in behaviour of a single individual?
    A. Uplift modelling can't know the change in behaviour for any individual, any more than a conventional model can know the behaviour of an individual in the future. But it can predict it. It does this by looking at two groups of people, one of which was subject to the marketing action in question, and the other of which was not (a control group). Just as it is standard to measure the incrementality of a campaign by looking at the overall difference in purchase rate between the treated group and an otherwise equivalent control group, an uplift model captures the difference in behaviour between these two groups, finding patterns in the variation.
  4. Q. Does Uplift Modelling Really Work?
    A. Uplift modelling can work, and has been proven to do so with in-market tests. Uplift models are harder to build than conventional models, because they predict a second-order effect—usually the difference between two probabilities. This means that the error bars tend to be larger than for conventional models, and sometimes there is simply not enough signal for current techniques to model accurately. This is especially true when, as is often the case, the control group is small.
  5. Q. When does uplift modelling predict different things from non-uplift models?
    A. It's perhaps easier to say when they predict the same thing. This is usually when there is essentially no behaviour in the control group. For example, if a set of people purchase product X after a mailing, but no one purchases it without the mailing, an uplift model should predict the same thing as a conventional response model. Their predictions differ most when the variation in the change in behaviour runs opposite to the variation in the underlying behaviour. For example, suppose the background purchase pattern (the one you see if you don't do anything) is that mostly men buy product X, but the effect of a marketing action is to make more women buy it and fewer men, even though more men than women still buy when treated. In this case, uplift models will make radically different predictions from "response" models. A response model will concentrate on the fact that more men buy (when treated) than women; but an uplift model will recognize that women's purchases are increased by the treatment whereas men's are suppressed.
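To make that men/women example concrete, here are some invented illustrative rates (not from any real campaign):

```python
# Hypothetical purchase rates for the men/women example above
# (invented numbers, purely for illustration)
rates = {
    # segment: (rate if treated, rate if left alone)
    "men":   (0.09, 0.10),   # more men buy overall, but the mailing puts some off
    "women": (0.08, 0.03),   # fewer women buy, but the mailing wins many over
}

for segment, (p_treated, p_control) in rates.items():
    print(segment, "response rate:", p_treated,
          "uplift:", round(p_treated - p_control, 2))
```

With these numbers, a response model—which only sees the treated group—ranks men first (9% vs 8%), while an uplift model ranks women first (+5 points vs −1).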
  6. Q. How do you measure the quality of an uplift model?
    A. Standard quality measures for models (such as gini, R-square, classification error etc.) don't work for uplift models, because they are all based on comparing an actual, known outcome for an individual with a predicted outcome. Since a single person can't simultaneously be treated and not treated, no such comparison is possible for uplift.
    There is, however, a generalization of the gini measure called Qini that shares some of the characteristics of gini, but which does apply to uplift models. It is described in the paper referenced as [1].
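The exact definition of Qini is given in [1], but the flavour can be sketched: rank customers by predicted uplift, then accumulate treated responses minus (suitably scaled) control responses as you work down the ranking. The function name and the scaling choice below are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def qini_curve(pred_uplift, outcome, treated):
    """Cumulative incremental responses, best predicted uplift first."""
    order = np.argsort(-pred_uplift)        # rank by predicted uplift, descending
    y, t = outcome[order], treated[order]
    cum_treat = np.cumsum(y * t)            # responses among treated so far
    cum_ctrl = np.cumsum(y * (1 - t))       # responses among controls so far
    n_treat = np.cumsum(t)
    n_ctrl = np.cumsum(1 - t)
    # Scale control responses to the size of the treated group seen so far
    scale = np.divide(n_treat, n_ctrl,
                      out=np.zeros(len(t)), where=n_ctrl > 0)
    return cum_treat - cum_ctrl * scale
```

Plotting this curve against the straight line you would get from a random ordering gives a gini-like picture; the Qini coefficient is, loosely, the area between the two.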
  7. Q. What are the main applications of uplift modelling?
    A. So far the biggest successes with uplift modelling have been in the areas of customer retention and demand generation (cross-sell and up-sell, particularly).
    The state-of-the-art approach to customer retention is to predict which customers are at risk of attrition (or "churn") and then to target those at high risk who are also of high value with some retention activity. Unfortunately, such retention efforts quite often backfire, triggering the very attrition they were intended to prevent. Uplift models can be used to identify the people who can be saved by the retention activity. There's often a triple win, because you reduce triggered attrition (thus increasing overall retention), reduce the volume targeted (and thus save money), and reduce the dissatisfaction generated among those who don't react well to retention activity.
    The other big successes have come in the area of cross-sell and up-sell, particularly of high-value financial products. Here, purchase rates are often low, and the overall incremental impact of campaigns is often small. Uplift modelling often allows dramatic reduction in the volumes targeted while losing virtually no sales. In some cases, where negative effects are present, incremental sales actually increase despite a lower targeting volume.
  8. Q. Are there any disadvantages of uplift modelling?
    A. Uplift modelling is harder and requires valid control groups to be kept, which have to be of reasonable size. Experience shows that it is also easy to misinterpret the results of campaigns when assessing uplift, especially when it is first adopted. Adoption of uplift models usually results in reductions in contact volumes, which is sometimes seen as a negative by marketing departments. An uplift modelling perspective also often reveals that previous targeting has been poor, and sometimes brings to light negative effects that had not previously been identified.
    There is also some evidence that uplift models need to be refreshed more frequently than conventional models, and there are clearly cases where either data volumes are not adequate to support uplift modelling or where the results of uplift modelling are not significantly different from those of conventional modelling. Anecdotally, this seems to be the case in the retail sector more than in financial services and communications.
  9. Q. How does uplift modelling relate to incremental modelling?
    A. It's the same thing. Various people have apparently independently come up with the idea of modelling uplift, and with different statistical approaches to it. There is no broad agreement on terminology yet. Names include
    • uplift modelling
    • differential response analysis
    • incremental modelling
    • incremental impact modelling
    • true response modelling
    • true lift modelling
    • proportional hazards modelling
    • net modelling.
    These are all essentially the same thing.


[1] Using Control Groups to Target on Predicted Lift: Building and Assessing Uplift Models, Nicholas J. Radcliffe, Direct Marketing Journal, Direct Marketing Association Analytics Council, pp. 14–21, 2007.



07 September 2007

41 Timeless Ways to Screw Up Direct Marketing

Screw Up the Control Groups

  1. Don't keep a control group
  2. Make the control group so small as to be useless
  3. Raid the controls to make up campaign numbers
  4. Use deadbeats as controls
  5. Don't make the controls random
  6. Use Treatment Controls but not Targeting Controls

Alienate or Annoy the Customer

  1. Trigger defection with retention activity
  2. Use intrusive contact mechanisms
  3. Use inappropriate or niche creative content
  4. Insult or patronize the Customer
  5. Mislead or disappoint the Customer
  6. Fail to Listen to the Customer
    • Bonus marks for refusing to take "no" for an answer
    • Double bonus marks for calling the customer back after she's said "No thanks, I'm not interested" and hung up.
  7. Overcommunicate
  8. Make it hard for the Customer to do what you want.
    • (Overloaded websites, uncooperative web forms, understaffed call centres, uninformed staff and failure to carry stock to meet demand are but a few of the ways to achieve this).
  9. Intrude on the Customer's Privacy
  10. (Over)exploit the customer

Misinterpret the Data

  1. Confuse "responses" with "incremental responses" (uplift)
  2. Confuse revenue with net revenue
  3. Take Credit for That Which Would have Happened Anyway
  4. Double Count
  5. Believe the name of a cluster means something
  6. Believe the data
    • The fact that the computer says it's true and prints it prettily, doesn't mean it is.
  7. Disbelieve the data
    • The fact that the data doesn't show what you hoped, thought or expected doesn't mean it isn't so.

Screw Up Campaign Execution

  1. Discard response information.
  2. Revel in unintended discounts and incentives
  3. Use dangerous, insulting or disrespectful labels
    • Yes, I really have known an airline label its bottom-tier frequent fliers "scum class", and a retailer label a segment of its customers "vindaloonies".
  4. Ship mailings with the internal fields filled in instead of external ones (see also 26)
    • Dear ScumClass, . . .
  5. Direct people to slow, hard-to-navigate websites, IVR Systems or overloaded or poorly designed call centres
  6. Fail to inform your front-line staff about offers you're sending to customers
  7. Fail to ensure your company can fulfill what the marketing promises
  8. Use a name that's very visually similar to a better known name that has negative connotations for many people.
    • I frequently get mail from Communisis, but I never read their name correctly.

Mismodel or Misplan the Campaign:

  1. Confuse statistical accuracy/excellence with ability to make money
  2. Model the wrong thing
  3. Use undirected modelling instead of directed modelling
  4. Assume equivalence of campaigns when important things have changed.
  5. Ignore external factors (seasonality, competitor behaviour etc.)
  6. Fail to make dates relative
  7. Contaminate the training data with validation data
  8. Screw up the Observation Window (Predict the Past, not the Future)
  9. Ignore changes in the meaning of data
  10. Fail to sanity check anything and everything


03 September 2007


OK, so you didn’t record the responses. But we know who we mailed, right? So we’re home free. We can build uplift models to figure out patterns in the change in behaviour against the controls! That’s even better! Controls?
