17 December 2022

Marketing Micro.blog

TL;DR: Manton Reece (creator of Micro.blog) and Daniel Jalkut (creator of MarsEdit, Mac software for posting to weblogs, including Micro.blog) recently discussed, on their long-running Core Intuition podcast, how Manton could better market Micro.blog and gain more users during the mass exodus from Twitter that has followed Elon Musk's acquisition and mismanagement of the platform. My top advice is:
  1. Rely more on viral and (incentivized) referral marketing than on paid advertising.
  2. Go after authors and other (professional) writers, who seem like a perfect fit for the platform.
  3. Create specific material for people thinking of switching from Twitter to Mastodon, explaining the relationship between Micro.blog, Mastodon, the Fediverse and Twitter, and emphasizing what's different about Micro.blog.
  4. Make the client software more obvious on the Micro.blog website; at the moment they're kind of buried in the Help pages.
  5. Before you even consider paid marketing, analyse your existing sign-up, conversion and usage data and get set up to do that consistently. If you don't have historical data, start capturing it immediately: history starts with your oldest data (as my friend, David Townsend, taught me).
  6. If you do paid advertising, consider not giving incentives as part of the promotion; if you do give discounts, be careful interpreting the results of the campaign.
  7. Be aware of negative effects in marketing (which, although relatively unlikely in your case, are more widespread than is commonly accepted).

Background

Micro.blog is an excellent Twitter alternative (inter alia) created by Manton Reece, who is a deep thinker about social media and has written a book about Indie Microblogging. Micro.blog launched in 2017 as a blogging platform (with a particular, but non-exclusive, focus on title-less “microposts”, as well as full blog posts), and has since expanded to include hosting podcasts, photos, “book shelves” and more.

Micro.blog is a first-class member of the Fediverse, in the sense that it supports ActivityPub, meaning that you can follow and be followed by Mastodon users from Micro.blog. Manton deliberately eschews many of the Twitter features that he thinks have contributed to its becoming a toxic platform for many users. For example, it has no equivalent of retweeting, does not encourage use of hashtags, does not use an algorithmic timeline, does not show the number of followers an account has, does not carry advertising, and has active community management, with Jean MacDonald (@jean@micro.blog) employed as its community manager.

Micro.blog has an open API, allowing people to create accounts and post without paying, and also to link one or more external blogs to Micro.blog, but most users subscribe either to its $5/month hosted microblog plan, or its $10/month Micro.blog Premium plan, which includes podcasting, video, bookmarks and (sending) newsletters.

It's slightly ironic that I am not, in fact, a very active user of Micro.blog, for somewhat obscure reasons, but I am an enthusiast for the platform and have followed its development and growth since before the Kickstarter campaign Manton used to launch it. (The reason I don't use it much is that I've been taking advantage of its open nature as a way to experiment with hosting a microblog and blog on my own weird tag-based database, microdb, and because I don't work much on that, posting to it is actually a bit of a palaver for me. I also already have blogs on far too many platforms: this Scientific Marketer Blog on Blogger, the Test-Driven Data Analysis Blog on GitHub Pages (with Pelican), and various List blogs and prose blogs on Eric Bower's fascinating list.sh and prose.sh, SSH/PKI-based platforms under the pico.sh umbrella. All of these effectively syndicate through my micro.blog/njr account.) (Pretty good that you can do all that, huh?)

Marketing Micro.blog

On Episode 541 of Core Intuition, Manton and Daniel discussed how to market Micro.blog, and mentioned that their community manager, Jean MacDonald, always emphasizes that, if they do, they need to measure the impact (which is obviously right).

As someone who has worked a lot in marketing (not the adtech, surveillance capitalism kind, but the sort of direct marketing most businesses need to do), I have thoughts. Here I'll expand on the TL;DR above.

  1. Viral and (incentivized) referral marketing. Social media is viral by nature, and current (active) users are generally the best ambassadors for the platform. I would have thought that further encouraging members to promote the platform by giving them discounts (probably free months/credit on their plan) when someone else signs up for a paid account would be a highly efficient way of using the existing community. I suspect the incentive would not have to be a lot: a single extra month on whatever plan you're on if someone signs up using your code seems plenty.
  2. Go after authors and writers. There was some talk on the podcast about celebrities like Elton John, and Manton commented that there are no music-related features on the platform (though as Daniel keeps pointing out, it wouldn't take much to extend bookshelves to record shelves, film shelves etc.)

    But Micro.blog does have a special focus, it seems to me—writing and books. First, unlike Twitter and Mastodon, there is no limit to post length on Micro.blog. If you go past the maximum micro-post length (280 characters), only the first part is shown and you have to click a link to see the full post; and full blog posts with titles just show as links in the timeline. But despite the name, Manton absolutely sees Micro.blog as a full-blown blogging platform. (This confused me for ages, and the name still seems odd, to me, for a platform that supports micro-posts and full blog posts; but naming is hard.)

    More than this, Manton loves books and has built a special Bookshelves feature for tracking what you're reading. He has also added the ability to send out your posts as email newsletters, as well as publishing them to the web.

    It seems to me that the ideal people to attract to Micro.blog are writers. I don't know what the best way to do this is, but probably just reaching out to some (and probably offering free accounts, I guess) would be a great start. If a few (more?) novelists, poets, and essayists joined the platform, I would have thought that would both further raise its tone and potentially attract many of their readers.

  3. Create specific material for people thinking of switching from Twitter to the Fediverse. Unsurprisingly, there is a help article on Micro.blog called What’s the difference between Micro.blog and Twitter? That's great, and is needed more than ever. But there's no article called What’s the difference between Micro.blog and Mastodon? I think creating one is important, ideally emphasizing many of the same things as the Twitter comparison does, but also more specifically highlighting the significant differences from Mastodon in particular, and the Fediverse more generally, while simultaneously getting the message across that you can be a first-class participant in the Fediverse on Micro.blog.

    And of course, that help article shouldn't just be buried in the help: it should be a blog post, linked from the front page, and promoted to high heaven, especially to all the people now writing articles about Mastodon and the Fediverse. (And writers!)

  4. Highlight the client software more. Although it wasn't hard to find, I was slightly surprised, today, how long it took me to find the Mac client for Micro.blog, which I didn't have on the particular machine I'm writing on. It isn't in the Mac App Store (which is fine), but it also isn't (I think) mentioned on the front page of the site, isn't very prominent in the help, and when you log in on the web, there isn't even an understated call to action suggesting you might want to try a native client.

    This is true on mobile too.

    While there's nothing wrong with using Micro.blog through a web browser, many (most?) people choose to interact with it, as with other social media platforms, through a client app. And there are lots. (Two of Manton's commendable attitudes are (1) that he is a huge enthusiast for the web in general, and the indieweb in particular, and (2) that he is deeply committed to letting a thousand client flowers bloom, and doesn't privilege the apps Micro.blog itself produces.)

    But it wouldn't hurt to let people know.

    It's not like they're trying to hide it. There's a great (anchorless) section of the Micro.blog help site highlighting (at the time of writing) nine apps you can use with the service. But you have to go looking.

  5. Measure before you market. As noted above, Jean apparently encourages Manton to put in place good measurement before embarking on (paid) marketing, which is obviously sound advice. As much of the content in other articles on this blog emphasizes, these measurements are quite hard, but important.

    Possibly less obvious is that you need a good baseline set of measurements before you start. Micro.blog may already track sign-up rates (for trials), conversion rates, upgrade rates, retention (renewal rates)/churn etc., but I don't ever really remember hearing Manton talk about it on Core Intuition, so possibly not. Even if no one at Micro.blog already does this, I would think there's a good possibility enough data is kept to be able to analyse many of these things retrospectively. I would strongly recommend building at least basic stats for these things and going back and graphing them over the whole history of the platform if possible. This will (a) be useful in itself and (b) set the company up for measuring marketing more effectively. (Needless to say, the measurement should be ongoing and ideally automated to produce some kind of report or dashboard, either on-demand or as something computed routinely.)
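    To make this concrete, here is a minimal sketch of the kind of baseline reporting I mean. The account records, field layout and plan names are all invented for illustration; the point is only that monthly sign-up and cancellation counts fall out of even a very simple event log:

```python
from collections import defaultdict
from datetime import date

# Hypothetical account records: (signup_date, cancel_date_or_None, plan).
accounts = [
    (date(2022, 1, 5), None, "hosted"),
    (date(2022, 1, 20), date(2022, 3, 2), "premium"),
    (date(2022, 2, 11), None, "hosted"),
    (date(2022, 2, 25), date(2022, 2, 28), "hosted"),
]

def month_key(d):
    """Bucket a date into a year-month string like '2022-01'."""
    return f"{d.year}-{d.month:02d}"

signups = defaultdict(int)
cancels = defaultdict(int)
for signed_up, cancelled, plan in accounts:
    signups[month_key(signed_up)] += 1
    if cancelled:
        cancels[month_key(cancelled)] += 1

for month in sorted(set(signups) | set(cancels)):
    print(month, "signups:", signups[month], "cancellations:", cancels[month])
```

    Extending this to conversion, upgrade and renewal rates is a small step once the underlying events are being captured consistently.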

  6. Be careful with incentives. I am definitely not advising against paying for promotional marketing; but I am suggesting that there will probably be a better return on investment from all the things above for lower cost than any likely ROI on direct/promotional paid marketing. If Micro.blog does decide to do promotional marketing—for example, by sponsoring podcasts, as discussed on the show—there are two important things to bear in mind:
    • Even if you use a code to track where people came from, that does not prove that they are incremental sign-ups or sales. Of course, it's quite strong evidence (and even stronger evidence that they probably heard the promotion), but some people will (do!) use promo codes who would have bought anyway.
    • This is particularly true if the promotion comes with some kind of discount or incentive—a 30-day instead of a 10-day trial; 12 months for the price of 10; whatever. For example, if someone I know wants a cloud-based server, I will definitely direct them to Linode (because they're excellent), but I'll point them to one of the podcasts with codes offering $100 free credit, because: well, why not? This is fine: I'm sure Linode is happy to give people the $100 to sign them up, just as I'm sure Manton is happy for them to get a discount. But it does affect the measurement of incrementality, and the bigger the incentive (and the $100 from Linode is huge), the more likely it is to affect assessment of incrementality (and, therefore, ROI).
    So if the purpose of the code is to track where users come from, a discount is double-edged: it makes it more likely people will use the code, but also makes the inference you can make from that use less strong. It also slightly biases things towards people who might not really want to pay the full price (though, to some extent, that's just human nature.)

    Needless to say, having good historical measurements allows you also to offset some of this, because if signups are fairly steady you should see the aggregate effect of promotional marketing.
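    As a sketch of that aggregate check (all numbers invented for illustration), compare sign-ups per week during the campaign against the pre-campaign baseline, and set the resulting estimate alongside the promo-code attribution:

```python
# Weekly sign-up counts (invented numbers for illustration).
baseline_weeks = [52, 48, 50, 51, 49, 50]   # before the campaign
campaign_weeks = [68, 71, 65]               # during the campaign
code_signups = 55                           # sign-ups that used the promo code

baseline_rate = sum(baseline_weeks) / len(baseline_weeks)
campaign_rate = sum(campaign_weeks) / len(campaign_weeks)

# Aggregate incrementality estimate: total lift over baseline during the campaign.
incremental = (campaign_rate - baseline_rate) * len(campaign_weeks)

print(f"Baseline: {baseline_rate:.1f}/week; campaign: {campaign_rate:.1f}/week")
print(f"Estimated incremental sign-ups: {incremental:.0f} "
      f"(vs {code_signups} attributed by promo code)")
```

    If the code-attributed figure greatly exceeds the baseline-derived estimate, that is a hint that many code users would have signed up anyway.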

  7. Be aware of negative effects. My final caution probably isn't highly salient for Micro.blog, but is important for all marketers to understand: marketing can, and does, have negative as well as positive effects. Unfortunately, there's an Upton Sinclair aspect to this:
    “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” — Upton Sinclair, I, Candidate for Governor: And How I Got Licked.

    Marketers tend to assume that the worst that can happen in a marketing campaign is that it costs money and has no impact. In fact, the truth is much harsher: it is perfectly possible to spend good money to drive customers away. In fact, it's common. Usually, there's a mix, with a few people being put off, and more people being turned on, so that there's a net benefit. It takes fairly inept marketing to generate a net negative effect in overall behaviour.

    One case, however, where this is extremely common is retention activity. I've written about this extensively, for example in this paper on churn reduction in mobile telephony. The basic gist is that people who leave are generally getting poor value for money, poor service, are not using the product or have had a bad experience: often they remain paying customers mostly out of apathy, laziness or because they don't realise they're out of contract. Calling them up and offering them the chance to lock in again often acts as a catalyst for cancellation. (“Really? I'm out of contract. Does that mean I can cancel right now?”) I'm not for a moment suggesting there are hordes of Micro.blog customers only paying subscriptions because they've forgotten about them or can't remember how to cancel. But you never know: there might be a couple!


16 August 2013

The Misfit Shine: Points and Steps

Summary: As far as I can tell, 10 steps = 1 point.

UPDATE: As I didn't know (but perhaps should have), the Shine App can also just tell you how many steps (it thinks) you have taken. If you tap the big orange circle for the day:

Shine Steps Detail

it expands to show the number of steps it recorded and (based on the height and weight estimates you gave it) an estimate of the number of calories your exertions burned and the distance travelled.

Shine Day Summary

As you can see, it isn't exactly 10 steps = 1 point, but it's close (in this case 1686 points for 16422 steps). So I guess it's adjusting for something else. I'll do another post with a summary table.

As discussed in my last piece, I got the beautiful Misfit Shine Activity Tracker and have been happily using it.

When you set it up, you need to choose a daily target number of points. You get points for movement, but it doesn’t really tell you anything about the scale. It suggests three levels, which (from memory) were 600, 1000 and 1600, and it described these with fuzzy terms that were something like “kinda active”, “active” and “super active”. I chose 1,000 points.

The obvious question is: how many steps is that, and how does it relate to the widely used recommendation that people do 10,000 steps a day (e.g. from the UK National Health Service; according to the Horizon episode Monitor Me, this is a recognized standard)?

Misfit Wearables don’t really tell you, so I thought I’d measure it. I did a short walk around the block three times, taking 1630 steps the first time and 1640 steps the second and third times. (I didn’t have a pedometer handy, and didn’t really want to compare one measurement error against another anyway, so I used a counter app, Tally Counter on the iPhone, and counted every 10th step. I walked a few extra steps to make it a multiple of ten each time.) It doesn’t make any difference, but I know my typical stride length is just over a yard so this walk was around a mile (1680 yards). For the first circuit I wore the Shine using the sports band on my right wrist (I was also holding the phone in my right hand and tallying). The second time I used the magnetic clip on my shirt near the neck. The third time I clipped it onto the ticket pocket on my jeans. I synchronized the Shine immediately before going out and immediately upon return.

These are the results:

Circuit                       Start Points   End Points   Points Delta   Steps
First (right wrist)                    162          325            163    1630
Second (neck)                          325          480            155    1640
Third (jeans ticket pocket)            504          643            139    1640

The first result seems very strongly to support my guess that they are simply using 1 point for every ten steps and seems to suggest that the Shine is very accurate at detecting steps both on the wrist and on the neck. The second and third ones are slightly off, but still close to that at 10.6 steps and 11.8 steps per point respectively. Obviously I don’t know whether I got lucky the first couple of times, or whether the wrist is a better location, but I still think this data suggests quite strongly that the Shine uses 10 steps = 1 point.
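The arithmetic is trivial, but for completeness, here it is as a small calculation over the table above:

```python
# (location, start points, end points, steps) for the three circuits.
circuits = [
    ("right wrist", 162, 325, 1630),
    ("neck", 325, 480, 1640),
    ("jeans ticket pocket", 504, 643, 1640),
]

steps_per_point = {}
for location, start, end, steps in circuits:
    points = end - start                     # points earned on this circuit
    steps_per_point[location] = steps / points
    print(f"{location}: {points} points for {steps} steps "
          f"= {steps / points:.1f} steps/point")
```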

One thing the Shine doesn’t seem to have is a way to export the data (either in processed or raw form) from the phone. As a data analyst, I would definitely be interested in getting some kind of data export so that I could look at other things myself. It would be great if Misfit were to add this at some point.


07 August 2013

The Misfit Shine: A Review

A long time ago I backed a crowd-funded project on Indiegogo for the Misfit Shine, “the world’s most elegant physical activity monitor”. It blasted through its funding goal and suffered delays, but about a month ago they asked for a shipping address, though I don’t recall receiving a shipping notice. Nevertheless, two days ago my doorbell rang and a man from FedEx handed me a packet containing a Shine.

ShineNaked800x650

And I love it.

The Shine is elegant, beautiful even. It is understated but playful. It looks like a small, grey pebble, until you tap it twice. Then one or more of its twelve, twinkling, pure-white LEDs will shine, telling you how much you have moved today, relative to your goal. After that, in a delightful, ingenious way, the lights tell the time, to five-minute accuracy, a small antidote to the second-precision punctuality that modern life and gadgets so often seem to demand.

For me, so far, everything about the Shine is perfect. To sync it to an iPhone, you download and launch the app and then place the Shine on the phone’s screen. Activity data uploads, while the Shine puts on a light show and ripples spread out on the iPhone’s screen. It’s simple, but satisfying. Once synchronized, the app shows graphs of your movements, highlights notable achievements and summarizes how you’re doing, this week, relative to a points target that you can set.

Unlike many activity trackers, the Shine uses a replaceable battery that lasts 4–6 months, so no recharging is required and you can wear it at night if you want to track sleep. It is waterproof and rugged, so you can swim with it. It comes with a simple magnetic clasp that makes it very easy to attach to clothing, and there are various watch-strap and necklace attachments too. In a touch reminiscent of the special tool Apple provides for iPhone users to open their SIM slots, the Shine comes with an elegant, dedicated tool for opening the battery compartment. (You could use a screwdriver, but it all adds to the feeling that they’re not skimping, that everything should be perfect.)

ShineStrapped950x650

Oh: and as far as I can tell, it works. I don’t know how accurate it is, but the activity graphs seem to match well what I have been doing, and the granularity of information is just right. Both days, so far, it’s encouraged me to move more, and it doesn't seem as if it's going to become a burden.

I think Misfit Wearables has got just about everything right. I hope Shine becomes a massive hit.

So what’s this got to do with Scientific Marketing?

I didn’t post this with a view to its relevance to the usual themes of this blog; I just wanted to spread the word about my lovely new toy. But in fact, it’s not so far off topic.

The main focus of this blog is how marketing is used—well and badly, for good and for ill—to attempt to change people’s behaviour. Effective marketing campaigns cause people to do things (purchase, renew, stay, click, visit) that they would otherwise not have done. Proper campaign design, with appropriate use of control groups, allows measurement of the effectiveness of marketing in changing behaviour, while uplift modelling allows us to identify the people for whom a given campaign, action, or activity is likely to be most effective.

In a marketing context, one entity (the marketing organization) is trying to change the behaviour of another (typically a customer or a prospective customer). In the case of activity monitors, the two entities are the same: I wear a Shine with the goal of influencing my own behaviour. Like many others, I know that I am less active than I should be, and would like to get a little fitter. The raison d'être for activity monitors is to encourage us to move more, by providing feedback on how we’re doing and incentives to do more.

Two days in, with only myself as a test subject, there is clearly a limit to how much I can really say about the true effectiveness of the Shine. But I think it gets a lot right.

By being small and beautiful, and pleasing to interact with, it immediately encourages us to use it, to wear it and to interact with it.

By providing only coarse information (it can show only 12 different activity levels) it discourages obsession and constant checking every few minutes (which could easily be negative), but encourages periodic checking, which is helpful.

By including a rather elegant, minimal watch function, it gives another reason to interact with the Shine, giving activity feedback along the way. Additionally, my sense is that the implicit message of the 5-minute accuracy meshes perfectly with the big-picture message of Shine itself: don’t obsess about exactly what Shine’s points measure, just try to make sure you move enough to accumulate plenty each day.

By having a non-rechargeable battery that lasts for months, and being sturdy and waterproof, it encourages wearing all the time, even at night, reducing the likelihood of finding yourself without it or breaking the habit of using it.

By making the iPhone app simple and minimalist, and making the sync process artificially pleasing, it encourages frequent interaction with the app, reinforcing progress (or lack thereof).

I think the people behind the Shine have pulled off something pretty amazing, and my prediction for myself is that I won’t abandon it any time soon, and it will prove a useful tool for changing my own behaviour.

Note:

My only connection with Misfit Wearables is that I backed their Indiegogo campaign and am the proud owner of a Shine. I would love them to succeed because I think they’ve made something excellent.


22 October 2008

Pre-, Post-, Uplift

Question: When I'm measuring uplift, do I need to measure behaviour in both a “pre”-period and a “post”-period?

Answer: No. You only need a post-period.

I've been asked essentially this question twice in the last 10 days, first by a client and then by a blog reader (who knew they existed?). And as every investigator knows, two is a pattern,[1][2] so it seemed worth a blog post.

Discussion / Justification

In the absence of control groups, it is natural to investigate the impact of an action by measuring some behaviour before and after the intervention. For example, it is common for people to measure something like usage over a period of six weeks before and after a mailing, and then to attribute any observed change in behaviour to the mailing. Such an approach is exactly the one we tend to use instinctively in ordinary life. But of course, the fundamental critique of this that leads us to consider control groups and uplift is that such an approach does not allow us to separate out the effect of our intervention (in this case, the mailing) from everything else that might cause behaviour to change.

I have written in many places that, in the case of a binary outcome R, such as purchase, uplift is defined as

U = P (R | T) – P (R | C)

where U is the uplift, P is a probability (or observed rate), T denotes treatment, and C denotes non-treatment (control).

In the case of continuous outcomes, such as spend, S, we similarly have

U = E (S | T) – E (S | C)

where E denotes an expectation value.

However, there is a temptation for people instead to define it as (in the binary case)

U = [P (R | T; post) – P (R | T; pre)] – [P (R | C; post) – P (R | C; pre)] (FLAWED!)

where “post” and “pre” denote measurements after and before the time at which the treatment is applied.

This is wrong.

In order for the control group to be valid, it must be drawn from the same population as the treated group. In practice this means that we must first form a candidate “eligible” population and then randomly allocate people from it to each of the treatment and control populations (though not necessarily in equal numbers). If this is true, then obviously the probability of the outcome of interest is by definition the same in the two populations before the treatment. That is not, of course, to say that if we measure it we will observe an identical rate, but if our control group is valid, any variation will be due to sampling error. (Indeed, for this reason, it is often a good idea to stratify the population on the basis of previous response behaviour, so that we do get identical values for P (R|T; pre) and P (R|C; pre), up to integer rounding effects.)

So the first thing to note is that if our sample is valid, the more complex formula reduces to the simpler one. To the extent that it does not, it is less accurate, by virtue of having added in extra noise through pointless sampling error. This, if you like, is the numerical objection to the more involved formula.

But there is an even more important philosophical objection to the more complex formula. For it conflates two quite different ideas. The first is the one we are interested in—the impact of our treatment. The other is the change in behaviour over time. And while it is true that our goal is typically (for example) to increase a purchase rate over time, that is emphatically not what we should be trying to measure here. The whole idea of marketing is to change behaviour relative to what it would have been without the intervention. If sales would have fallen without the intervention, but our intervention reduces that fall, then our intervention has had a positive impact. Of course, “the patient may still die”,[3] and that is very much a matter of legitimate concern, but the goal of uplift is to measure the success of the treatment.
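To make the two formulae concrete, here is a small numerical sketch (all counts invented). With a valid randomized control group, the pre-period rates are equal in expectation, so the double-difference formula reduces to the simple definition:

```python
# Invented counts for illustration.
treated = 10000
treated_purchases = 520          # responses in the treated group (post-period)
control = 10000
control_purchases = 430          # responses in the control group (post-period)

# The simple (correct) definition: U = P(R|T) - P(R|C).
uplift = treated_purchases / treated - control_purchases / control
print(f"Uplift: {uplift:.2%}")

# With a valid (randomized) control group the pre-period rates are equal,
# so the "double difference" formula reduces to the same quantity:
pre_T = pre_C = 0.031            # identical pre-period rates by construction
double_diff = ((treated_purchases / treated - pre_T)
               - (control_purchases / control - pre_C))
assert abs(double_diff - uplift) < 1e-9
```

In practice, of course, the observed pre-period rates will differ slightly through sampling error, which is exactly the extra noise the flawed formula adds.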

Footnotes

[1] As a theoretical physicist by training, I have to work hard to avoid falling into the trap of following Maier's Law, which states that “when the facts don't conform to the theory, they must be disposed of” (attributed to Maier, N. R. F. (1929), Reasoning in White Rats, Comp. Psy. Mono., 6 (29), in Roeckelein, Jon E., Dictionary of Theories, Laws and Concepts in Psychology, Greenwood Publishing Group (Westport) 1998, as “if the data do not fit the theory, the data must be disposed of”.) I remember a colleague, after much work, producing a single data point and proudly showing me a graph containing said observation and the theoretical behaviour. What was striking was that the observation was nowhere near the curve. This didn't perturb my colleague unduly, leading to the observation that it takes a theoretical physicist to fail to fit a curve through even a single data point!

[2] It was, of course, Oscar Wilde who wrote, “To lose one parent, Mr Worthing, may be regarded as a misfortune; to lose both looks like carelessness.” (The Importance of Being Earnest, Oscar Wilde). While theoreticians are often guilty of playing too fast and loose with the data, it's depressing how many non-scientists (and, for that matter, scientists!) are impressed beyond reason when they manage to fit a straight line through two data points.

[3] “The treatment was a complete success; unfortunately, the patient died.” I am distressed to be unable to find a source for this aphorism; if anyone has a reliable source, please do let me know.


04 May 2008

Bastard

26 February 2007

Uplift and Lift: Not the Same Thing at All

A lot of this blog is about Uplift. You may wonder why I use an ugly, slightly redundant term like uplift when marketers normally talk simply about lift. The reason is that they are completely different.

Lift is a measure of targeting effectiveness that tells you how much better than random your targeting is. The idea is simple. Suppose you get a 1% purchase rate if you target randomly, and a 3% purchase rate if you target only segment A. Then the lift is 3% / 1% = 3. If you have a score (say, purchase probability) then you can plot lift as a function of volume targeted (starting with the best customers), and it will look something like this (assuming your score allows better-than-random targeting):

A lift curve, which in this case gently declines from about 3.7 when targeting 5% of the population, to 1.0 (where it must end) for 100% targeting.   Also shown is a flat, horizontal line at 1.0, representing the lift for random targeting.

Lift curves show the same information as Gains Charts (with which they are often confused) but display it in a different way. And they always end at 1 because of course if you target the whole population, you get the same response rate as if you target randomly (on average).
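As a sketch of how a lift curve is computed (scores and outcomes invented for illustration): rank customers by score, then compare the response rate among the top-scoring customers with the overall rate:

```python
# Invented (score, purchased) pairs; a higher score means the model
# predicts a higher purchase probability.
customers = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1), (0.5, 0),
             (0.4, 0), (0.3, 1), (0.2, 0), (0.1, 0), (0.05, 0)]

customers.sort(key=lambda c: c[0], reverse=True)   # best customers first
overall_rate = sum(bought for _, bought in customers) / len(customers)

lifts = {}
for n in (2, 5, 10):                               # target the top n scorers
    top_rate = sum(bought for _, bought in customers[:n]) / n
    lifts[n] = top_rate / overall_rate
    print(f"Top {n}: response rate {top_rate:.0%}, lift {lifts[n]:.2f}")
```

Note that targeting all 10 customers gives a lift of exactly 1, as it must.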

In contrast, uplift directly measures the (additive) difference between the expected outcome in a treated group and control group. For a binary outcome, such as purchase, this is

P (purchase | treatment) – P (purchase | no treatment).

For a continuous outcome, such as spend, this is

E (spend | treatment) – E (spend | no treatment).

So in summary,

  • Lift measures the effectiveness of targeting. It quantifies how much better the outcomes are in a target group than they would be in a randomly chosen group (multiplicatively: 2 = twice as good).
  • Uplift quantifies the effectiveness of treatment. It measures the difference in outcome between a treated group and an equivalent non-treated group (additively: 10% uplift means that if a person's probability of purchase without treatment is 5%, with treatment it's 15%.) Strictly, uplift for binary outcomes is measured in percentage points, while for continuous outcomes, e.g. incremental spend, uplift is simply the difference in expected spend between the treated and non-treated group.


23 February 2007

Response Model

12 February 2007

The Many Parents of Success

[Cartoon. Sales Report, Jan. to Feb.: “So what made sales jump?” “My ads.” “My mail.” “My displays.” “My killer stock.” Sales Report, Feb. to Mar.: “So what made sales fall?” “Jo’s displays.” “Kay’s ads.” “Ravi’s stock problems.” “Lou’s email.”]


08 February 2007

Modelling that which is Important

Former US Secretary of Defense Robert McNamara reportedly said:

"We have to find a way of making the important measurable, instead of making the measurable important."

I heard this one morning on Radio 4's "Thought for the Day", and have never tracked down a reliable source, but I have probably quoted this more than almost anything else. I think it is a remarkable, and important, observation.

As marketers, we must strive to model that which is important, rather than making important that which we can conveniently model.

Traditional so-called "response" models do not model response at all. They most commonly model the (conditional) probability of purchase, given treatment:

P (purchase | treatment)

But this isn't what affects the return on (marketing) spend. That is affected by the change in purchase probability resulting from a treatment:

P (purchase | treatment) - P (purchase | no treatment).

That is what we model if we want to target so as to maximize expected ROI.
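As a toy illustration of the difference (all counts invented), consider two segments from a hypothetical campaign with treated and control groups:

```python
# Invented campaign results by segment:
# (treated n, treated purchases, control n, control purchases).
segments = {
    "A": (5000, 300, 5000, 50),    # modest purchase rate, large uplift
    "B": (5000, 450, 5000, 440),   # high purchase rate, almost no uplift
}

results = {}
for name, (nt, pt, nc, pc) in segments.items():
    response = pt / nt             # what a traditional "response" model sees
    uplift = pt / nt - pc / nc     # the change in behaviour that drives ROI
    results[name] = (response, uplift)
    print(f"Segment {name}: response {response:.1%}, uplift {uplift:.1%}")
```

A traditional response model would rank segment B (9% purchase rate) above segment A (6%), yet nearly all of B's purchasers would have bought anyway; targeting on uplift correctly prefers A.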

Such models go by various names. Portrait Software (for whom I work) calls them uplift models, and used to call them differential response models. Others call them incremental impact models, net response models, true response models, true lift models, and various other combinations of these and other words. But they are all the same.

Uplift models predict that which is important. Traditional "response" models make important that which is easy to model.

Footnote: If you haven't seen Errol Morris's documentary about McNamara, The Fog of War: Eleven Lessons from the Life of Robert S. McNamara, consider doing so. It's extraordinary. Frightening, compelling, and revealing.
