Attribution modeling can be a murky science. How can marketers truly prove that the engagement they’re getting in social media, or even the television commercial that ran last week, drove a customer to buy a product in-store?
That got us thinking: Is attribution modeling all that accurate? How much is guesswork? And how much is actual “science”? We reached out to agency folks for insight. Here’s what they said.
Jonathan Haynes, Director of Data Sciences at Starcom USA, told CMO.com:
It depends on three factors. The first is what one means by attribution, then there’s the attribution methodology, and last is the quality and completeness of the data going into a model. The short answer is that attribution–when done correctly–is a science. Therefore, it is not guesswork, can be highly accurate, and is often necessary.
Attribution must, at minimum, assign credit to media placements for their contribution to conversions in a way that accurately captures the individual contribution of each touch point to the conversion event. We believe it is best to reserve the term attribution for analyses involving individual path-to-conversion data, though sometimes the term is used more broadly to include aggregate-level media mix models. The latter have less resolution, so accuracy is greater when working with individual-level data.
When working with individual-level data, attribution models must consider not just the paths of those who convert, but also account for individuals who do not convert. Anything short of that will produce biased, and therefore inaccurate, results.
Moreover, rule-based attribution often considers only the paths of converters and requires a "domain expert" to assign model weights. For these models, it is difficult to know whether a selected model is truly optimal. By contrast, algorithmic attribution across both converters and non-converters can be highly accurate in identifying which media are driving conversions.
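The bias Haynes describes can be illustrated with a minimal sketch. The data and the lift calculation below are our own illustration, not Starcom's methodology: a channel that appears on many converting paths looks valuable under a converter-only view, yet can add nothing once non-converting paths are counted.

```python
# Illustrative data (hypothetical): each record is the set of channels a
# user was exposed to, and whether that user converted.
paths = [
    ({"display", "search"}, True),
    ({"search"}, True),
    ({"social", "search"}, True),
    ({"display"}, False),
    ({"display"}, False),
    ({"social"}, False),
    ({"display", "social"}, False),
    ({"search"}, False),
]

channels = ["display", "search", "social"]

# Converter-only view: share of converting paths that include the channel.
# This is the biased picture -- a channel present on many converting paths
# gets credit even if it appears just as often on non-converting paths.
converters = [chs for chs, conv in paths if conv]
converter_share = {
    c: sum(c in chs for chs in converters) / len(converters) for c in channels
}

# Including non-converters: conversion rate among exposed vs. unexposed users.
def rate(records):
    return sum(conv for _, conv in records) / len(records) if records else 0.0

lift = {}
for c in channels:
    exposed = [(chs, conv) for chs, conv in paths if c in chs]
    unexposed = [(chs, conv) for chs, conv in paths if c not in chs]
    lift[c] = rate(exposed) - rate(unexposed)

print(converter_share)  # display appears on a third of converting paths...
print(lift)             # ...but its lift is negative once non-converters count
```

In this toy data, "display" earns a third of the converter-only credit, yet its conversion-rate lift is negative: users exposed to it converted less often than those who were not. Only a model that sees non-converters can surface that.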
All this said, models are only as good as the data put into them. We have found that attribution is only truly useful when paired with accurate cost and capacity information. Cost data entered into billing systems typically do not reflect the actual cost of delivering a specific impression. Moreover, it isn’t always possible to buy more of something that works well, or if you do, it may drive up the price and lower audience quality. Capacity constraints must be included to make realistic recommendations and forecasts.
Considerations aside, Starcom believes that any advertiser today making large digital buys should be using fractional cross-channel attribution modeling.
Holly DiCostanzo, SVP, Director of Integrated Analytics, Arnold Worldwide, told CMO.com:
Ideally, attribution modeling would blend both art and science–with art being applied knowledge of the client’s business, the consumer journey, and marketing and media strategy, and the science being data-driven models and algorithms.
Right now many tools offer only a last-click attribution modeling approach, which tends to overstate the impact of direct and paid search and understate the impact of brand advertising. Last click is neither art nor science–it’s just convenient, and that’s why it has become a prevalent solution for a very complex problem.
The good news is that multiple companies have entered the attribution modeling space and created their own innovative approaches for getting to a better model than “last click.” Many of these approaches are still rules-based (e.g., linear attribution, custom attribution) and thus are only as accurate as the assumptions driving them, not informed by what the data is revealing. Recently, however, data-driven solutions have been coming to market, and these should help improve the accuracy of attribution modeling.
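As a minimal sketch of the rule-based schemes discussed above (the path data is hypothetical, not any vendor's model), here is last-click versus linear credit on a single conversion path:

```python
# One user's ordered touch points before converting (illustrative only).
path = ["display", "social", "search"]

# Last-click: all credit goes to the final touch point before conversion.
last_click = {ch: 0.0 for ch in path}
last_click[path[-1]] = 1.0

# Linear: credit is split evenly across every touch point on the path.
linear = {ch: 1.0 / len(path) for ch in path}

print(last_click)  # {'display': 0.0, 'social': 0.0, 'search': 1.0}
print(linear)      # each touch point gets ~0.33 of the credit
```

Both schemes are rules, not inferences: the weights come from an assumption about the journey, not from the data, which is exactly the limitation data-driven approaches aim to remove.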
It’s still an emerging space, however, and it’s important to recognize that getting to an accurate attribution model is typically a multistep evolutionary process–much like both art and science.
Shaina Boone, SVP of Marketing Science at Critical Mass, told CMO.com:
Attribution has been a bane to the digital industry for years–despite digital supposedly being so measurable. In 2007, it was a top conversation at all the measurement conferences, and then as it became too difficult to execute, the conversation about it waned. The industry needed some time to mature a bit, and it has.
Most models involve some element of assumption, which should be agreed upon by all parties at the outset. Calling it guesswork is a bit unfair: it oversimplifies the challenges and dismisses all of attribution’s value. While a model may not be 100 percent accurate, it can contain important directional information that leads to more informed business decisions. When its credibility is challenged, people tend to fall back on their gut, which I’d say is the real “guesswork.”
Like personalization and segmentation, attribution is enjoying a resurgence. Now that technology enables more credible ways of collecting and analyzing the data, marketers are becoming more sophisticated in using data to make decisions.
This sophistication includes a move away from marketing mix modeling (MMM) toward multisource attribution modeling, which is faster, nimbler, and can be acted on quickly. Traditional MMM is very slow and was never intended for digital, or for many verticals beyond CPG. Multisource attribution can be used more broadly and effectively.
Marketers cannot expect to increase their overall marketing budget, or know how to allocate their existing budget properly, without [some form of attribution modeling]. It is critical to understand how all marketing works together in the ecosystem and what each channel contributes, even if only loosely. The “era of the gut” is over, and we have the tools and technology to get closer to the answers.
Karima Zmerli, Group Director of Consumer Insights at Razorfish, told CMO.com:
At Razorfish we’ve always believed that credit for an action should not be attributed exclusively to the last exposure. Over the past five years we have complemented the usual last-touch attribution reports with more holistic views that include all touch points, using cookie-level data and custom analysis solutions.
Our technology and data road maps rely more and more on that holistic view, and we believe there is no future for the last-touch methodology in the era of big data; furthermore, we believe that almost any fractional attribution method is an improvement over the last-touch approach.
The tools have evolved and diversified over the past couple of years, from rule-based to statistical approaches. However, not all methods are equally reliable in explaining what happened, and not all are equally strong in predicting optimal allocations.
In fact, Razorfish relies more than ever on statistical approaches to assess the contribution of different channels, tactics, and touch points. Because modeling is both an art and a science, we are committed to testing and validating methodologies and outcomes not just from a statistical/data perspective but also based on our expertise and knowledge of digital.
I would not use the word “guess,” but I would say that different approaches have different levels of accuracy and confidence, and while there is no perfect model, there is a right model. Having a good analytical model for attribution is not enough; overlaying industry knowledge, business expertise, consumer mind-set, and media know-how is essential for selecting the right approach and understanding its implications for future allocations.