With digital marketing, the data to evaluate seems nearly endless. But all of that information brings the real challenge of identifying the metrics we should care about and the best way to measure them.
Brands are in the metaphorical seats of a great ROI-assessment boxing match: a fight between attribution, which assigns credit for sales, and causal measurement, which measures the sales a program actually caused. Sound the same? Nope. They're as different as Tyson and Ali.
When digital marketing measurement began, the data that could be collected, and how, were limited by the technology of the day. Thanks to technological advances, that's no longer true. Unfortunately, many brands haven't caught up. The most common approaches to performance assessment today have the following flaws:
- They focus primarily on justifying past actions rather than determining the best courses for the future.
- They count easy-to-measure metrics, such as clicks and visits, rather than sales effects.
- They assess only online impacts rather than impacts across all channels.
- They focus on assigning credit for sales to tactics according to predetermined percentages, rather than determining which sales were actually caused by the marketing program.
In This Corner: Attribution Modeling
Consider a simple approach to performance assessment: last-click attribution, in which the credit for a sale goes to the last marketing touch point. Last click is easy to measure and allows digital marketers to puff out their chests because all credit for sales goes to digital.
Last click is a good strategy for measuring some tactics, such as affiliate marketing. But in a cross-channel, cross-device world, does last click really make sense for most of what you’re doing? Does the final Google search for a store nearby, for example, deserve all of the credit for driving conversion? If I spend two hours consuming content to evaluate alternatives on my phone, but switch to my PC to buy, does the last PC impression deserve the credit when it had nothing to do with my decision?
The problem is that too many people are worrying about who gets credit for a sale instead of figuring out what actually caused the sale. Attribution modeling (wait, let me correct that: bad attribution modeling) focuses on a "rules-based" method of crediting a particular marketing program with a sale. Most rules-based attribution platforms have more complicated credit formulas than last click. A common approach gives one-third of the credit for a sale to the first marketing touch point, divides one-third among all of the interim touch points, and gives one-third to the last touch point. Others use sophisticated black-box regression models to allocate credit. These formulas still seem simple, just not as simplistic as last click.
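To make the contrast concrete, here is a minimal Python sketch of two rules-based schemes: last click and the one-third/one-third/one-third split described above. The journey and channel names are invented for illustration; no vendor's actual model is implied.

```python
# Minimal sketch of two rules-based attribution schemes (illustrative only).

def last_click(touchpoints):
    """All credit for the sale goes to the final touch point."""
    return [(tp, 1.0 if i == len(touchpoints) - 1 else 0.0)
            for i, tp in enumerate(touchpoints)]

def thirds_rule(touchpoints):
    """One-third to the first touch, one-third split across the interim
    touches, one-third to the last touch."""
    n = len(touchpoints)
    if n == 1:
        return [(touchpoints[0], 1.0)]
    if n == 2:
        return [(touchpoints[0], 0.5), (touchpoints[1], 0.5)]
    interim = (1.0 / 3) / (n - 2)
    credits = [1.0 / 3] + [interim] * (n - 2) + [1.0 / 3]
    return list(zip(touchpoints, credits))

# Hypothetical four-touch journey ending in a branded search.
journey = ["display ad", "email", "retargeting banner", "branded search"]
print(last_click(journey))   # all credit to "branded search"
print(thirds_rule(journey))  # 1/3, 1/6, 1/6, 1/3
```

Note that neither function asks whether the sale would have happened without any of these touches; each one merely divides up credit that is assumed to belong to marketing.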
Simple attribution is the “showman” boxer with heaps of bravado. People bet on him because he seems like a winner. But here’s the thing: The flashiest boxers often lose, regardless of their magnetism, because boxing is about the fight, not which guy has the better publicity team. Similarly, allocating credit via a formula is much easier than determining what actually caused a sale. But doing so doesn’t get you any closer to knowing what ROI your programs are driving.
And In This Corner: Causality Measurement
Fortunately, another kind of boxer is out there to bet on: a fighter with a three-punch combination of accuracy, actionability, and comprehensiveness. We call that boxer "causal measurement." Its fighting strategy focuses on the incremental revenue a program actually caused (causation), rather than the sales that merely happened at the same time (correlation).
Why is this so important? One common situation is giving a retargeting campaign credit for any sale it touched, even though many of those sales would have happened anyway, without advertising. Our research has shown that as many as 80-plus percent of the sales that correlate with some retargeting campaigns would have occurred anyway. That doesn't discredit retargeting as a strategy; it may still be very efficient, just not as efficient as it appears at first glance.
Further, if we erroneously conclude that retargeting drove eight times more sales than it actually caused, we might decide to pour money into it, relentlessly blasting banners at the relatively small number of people who visit our site, thus surpassing the point of diminishing returns and squandering resources instead of reaching out to other potential buyers.
Several methods exist to understand causality. Some companies use algorithmic attribution models to precisely connect the dots between a given tactic and the results it caused. The cleanest approach to causality, in my view, is to conduct scientifically based A/B testing using matched samples. One group sees the marketing messages while another sees control cell ads (e.g., public service announcement ads) instead. By calculating the sales made to the anonymized IDs in each group and then subtracting the “control” cell sales from the “test” cell sales, you can get a precise measure of the sales that were actually caused by the program.
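As a minimal sketch of that subtraction (all user counts and sales figures below are invented for illustration, and matched samples are assumed):

```python
# Hedged sketch of incremental-lift arithmetic from a matched test/control split.
# The user counts and sales figures are invented for illustration.

test_users, test_sales = 100_000, 4_800        # cell that saw the real ads
control_users, control_sales = 100_000, 4_000  # cell that saw PSA ads instead

# With matched samples, the control cell's sales rate is the baseline:
# what the test cell would have bought with no campaign at all.
baseline_rate = control_sales / control_users
expected_without_ads = baseline_rate * test_users

incremental_sales = test_sales - expected_without_ads
relative_lift = incremental_sales / expected_without_ads

print(f"Sales actually caused by the program: {incremental_sales:.0f}")  # 800
print(f"Relative lift over baseline: {relative_lift:.1%}")               # 20.0%
```

The per-user normalization matters only if the two cells differ in size; with truly matched samples, it reduces to the straight test-minus-control subtraction described above.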
This method may not be flashy, but it has a perfect win/loss record.
For the brands in the stands, the prize fight between attribution and causal measurement is still in the early stages. The betting window is still open. One boxer looks and talks big and focuses on the short-term win. The other is a lot quieter, but focuses on his fundamentals and is prepared to go the distance.
Where are you putting your money?