How confident do marketers feel in their marketing measurement? Based on a number of 2014 surveys, the answer seems to be: not very.
A useful place to start is a September study from Forrester Research (sponsored by marketing software company Kenshoo). The study found that “only one in nine marketers use advanced attribution methods.” The marketers who are using attribution are also falling behind on best practices. “Among those using attribution,” the study press release explains, “28% still use single click methods.” That means last click attribution (and the like) is alive and well.
The study included a survey of 106 marketing professionals—spanning brands, agencies, and analytics teams in the U.S. and Europe. In other words, a lack of good attribution seems to be a global problem—or at least a cross-continental one—and a problem that runs the gamut of the marketing ecosystem.
Another study by Econsultancy (from May, in association with Oracle Marketing Cloud) looked at the global problem head-on.
First, the good news: for over two-thirds of worldwide respondents, integrating marketing activities was a primary goal.
But there’s a lot of frustrating news as well. Many marketers felt they had little good insight to work with: only 43% of the respondents—less than half—felt they had a strong enough grasp of the customer journey to pivot their marketing mix accordingly. It’s no surprise that a full 62% of respondents felt that their messaging, execution, and media did not align across touch points. Meanwhile, 35% of respondents felt that their organizations were “not really” prepared for cross-channel marketing—and only 7% of the respondents did feel truly prepared. (Read more on the study in eMarketer.)
Poor Attribution, Less Mobile
That lack of confidence isn’t just impacting decisions. It’s impacting marketers’ ability to push ahead on new channels. A good example comes from another Forrester report—this one interviewing 100 brand-side marketing decision-makers. According to the study, a full 93% of marketers interviewed would increase their cross-channel investments if they could get a better handle on mobile attribution. As it is, though, only 13% of the marketers felt confident in their cross-channel measurement—and only 18% felt confident in their ability to gauge mobile ROI. Less knowledge means less investment in critical channels.
The Value of Marketing
The lack of confidence is moving up the corporate ladder, too. This year’s CMO Survey from Duke University’s Fuqua School of Business, the American Marketing Association, and McKinsey & Company finds that a full 62% of marketers feel pressure from their CEOs and/or boards to prove the value of marketing. Marketers aren’t just at a loss for the data that helps them manage their work; they’re at a loss for the data that helps them prove their value.
There is a silver lining in all this. Marketers are aware of the problem they face, and they know where to find solutions. To cite just one example: the CMO Survey also finds that marketers expect to increase analytics spending by 73% over the next three years.
Here’s to great insights ahead.
If you want to know how powerful marketing analytics can really be—and how to do it well—just look to the companies that do it best. That’s one of the key lessons of the Marketing Analytics Leadership Award (MALA), established by the Association of National Advertisers (ANA) to champion advancement of marketing measurement and accountability. MALA is a competition showcasing the companies that lead in analytics innovation and effectiveness. (The award is presented by MarketShare.)
Marketing trade publication Warc recently ran an in-depth piece about each of the three 2014 MALA finalists, highlighting the planning that goes into a strong analytics program—and the results it drives.
The numbers were impressive, to say the least:
- Management consultancy McKinsey “has reported that adopting an integrated approach to analytics could free up 15-20% of marketing expenditure”.
- Through its use of analytics, telecommunications and technology services company C Spire increased the effectiveness of its customer retention campaigns by 50%, and boosted its cross-selling by 270%. (C Spire was ultimately named the 2014 MALA winner.)
- Mobile workspaces leader Citrix can now “validate scenarios and predict business impact with at least 85% accuracy”.
- Computing and communications component company Intel now has greater insight into marketing impact than ever before. Edwin Derks, insights and market research leader at Intel, explains that the models his teams have built “are, in essence, capturing relationships between our marketing activities, external drivers of our business success along with what it brings us in terms of business success.”
Another key takeaway: cross-departmental participation is key to analytics success. Intel, for instance, works with models that are “fully supported and built both by marketing and finance.” In other words: to unite marketing and financial data, you need to unite the teams. (The reverse is also true: the data unites the teams as well.)
And how should marketing analytics groups gain that cross-departmental support? Justin Croft, manager/brand platforms and analytics at C Spire, advises: "Pick a high-profile, high-priority area that everyone will be thrilled to have insight into. Then the skillsets, resources and technologies will flow from there."
Of course, not everyone is satisfied with the toe-in-the-water approach. Judith Breisch, staff marketing operations analyst in Citrix's marketing operations and analytics team, comments: "We originally started small – one product, one country. We now wish we had done global, portfolio from the very beginning."
Whether you start big or start small, the message is clear: a well-orchestrated analytics program with cross-team support drives powerful results.
By this point in marketing history, it’s fairly unanimous that “last click” marketing attribution—giving all the credit to the ad that leads immediately to the conversion—is sub-optimal analytics. But while no sophisticated marketer would rely on last click alone, the spirit of last click is, unfortunately, alive and well.
Essentially, last click attribution is just one example of a wider problem. Marketers have a lot of readily-available data about a portion of the purchase funnel—and a lot less data about the other factors that drive consumers to a decision. In last click, that manifests itself in focusing analytics on one touch point, like email or search. But there are many more ways marketers still miss the full array of consumer decision drivers, and end up giving ads more credit—or more blame—than they really deserve.
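To make the contrast concrete, here is a minimal Python sketch. The four-touch journey and channel names are invented for illustration; it simply shows how last-click attribution concentrates all credit on the final touchpoint, while even a basic linear multi-touch model spreads credit across the path:

```python
# Hypothetical illustration: how last-click and linear (multi-touch)
# attribution split credit for one conversion across a customer journey.
# The journey and channel names below are invented for this sketch.

def last_click(journey):
    """All credit goes to the final touchpoint before conversion."""
    return {channel: (1.0 if channel == journey[-1] else 0.0)
            for channel in set(journey)}

def linear(journey):
    """Credit is split evenly across every touchpoint in the journey."""
    share = 1.0 / len(journey)
    credit = {}
    for channel in journey:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

journey = ["display", "email", "search", "search"]
print(last_click(journey))  # search gets 100% of the credit
print(linear(journey))      # display 25%, email 25%, search 50%
```

Neither model is "correct"—sophisticated attribution weights touchpoints from observed data rather than by rule—but the sketch shows how much the choice of model alone can swing the credit a channel receives.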
For a few examples of more sophisticated “last click” problems, read on.
1. Not looking beyond the ad
There are a lot of things that drive consumers to act—or that drive them away—that have nothing to do with marketing. If consumers buy more umbrellas when it rains, buy less gas when prices are high, or refuse to buy sub-par products, it’s quite possible that advertising had little to do with the decision.
In theory, that’s obvious. But in practice, external impact can be a lot harder to identify than you might think. And so it’s endlessly common for marketing campaigns to get the credit—or the blame—that really ought to go to those external factors. That’s why it’s critical to investigate not just what marketing factors are driving purchase decisions—but what non-marketing factors are driving customer activity, too.
2. Ignoring the halo effect
Say you’re a small sneaker brand running three ad campaigns concurrently. One ad is for your basketball line, one is for your running shoes, and a third is for the parent brand. Your data shows that your running shoe ads perform very well. But should all the credit go to the running shoe ad—or should some go to the other two ads as well?
If you’re active enough in marketing, it’s likely that consumers have seen a lot of your ads. They’ve seen ads for your multiple products. They’ve seen ads for your brand. They may even have associations with your advertising that go all the way back to childhood. These are powerful influences that will inevitably impact your ad effectiveness. And if you’re not accounting for the impact these “halo effects” have on your message, you’re potentially misreading the real contribution any given ad makes to your bottom line.
3. Ignoring the sequence
It’s not just your multiple campaigns that impact the effectiveness of each ad. The multiple ads within a campaign impact each other. That’s why a consumer who’s seen your ad once might not respond to your message, but might convert after seeing the same message four times.
This means a lot for attribution. When you’re measuring ad effectiveness, you can’t just look at how the single ad unit, or even the single message, performed. You need to understand how the message has followed consumers across their entire customer journey—from channel to channel, and within channels, all the way to the moment of conversion. Otherwise, you end up giving credit to an ad, when in fact the ad may only perform well across a specific sequence. You’re leaving critical information out of the picture.
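One simple way to see a frequency effect in your own data is to group conversions by how many prior exposures each consumer had. The event log below is invented for this sketch; real exposure counts would come from your ad server or analytics platform:

```python
# Hypothetical sketch: conversion rate by number of ad exposures.
# The (user, exposures, converted) log below is invented data.
from collections import defaultdict

events = [
    ("u1", 1, False), ("u2", 1, False), ("u3", 2, False),
    ("u4", 3, True), ("u5", 4, True), ("u6", 4, True),
]

# exposures -> [conversions, total users at that exposure count]
seen = defaultdict(lambda: [0, 0])
for _, exposures, converted in events:
    seen[exposures][1] += 1
    if converted:
        seen[exposures][0] += 1

for exposures in sorted(seen):
    conv, total = seen[exposures]
    print(f"{exposures} exposures: {conv}/{total} converted")
```

If conversion rates climb with exposure count (as they do in this toy log), crediting only the final impression understates every earlier ad in the sequence.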
To be sure, there are a lot more ways that “last click” thinking persists. But you get the point: getting attribution right is a cross-channel, holistic task. Anything short of that leaves data, and ultimately revenue, on the table.
Want to learn a bit more about the state of attribution today? Check out the latest Forrester report on cross-channel attribution providers.
Want to get beyond data grunt work? Try these steps for data setup.
With so much information spread out across various workflow software, Excel files, and even scraps of paper, it takes enormous effort to gather all the data, and to normalize it so all the information can work together toward a consistent story. It’s not much of a surprise that the grunt work of “data wrangling” takes up to 80% of all data effort.
How do you keep the data wrangling to a minimum? Setting up your data in the right way is key. I’ll offer a few best practices below.
Know what you’re getting
When a retail supplier ships goods to a store, everyone is in agreement on what’s being shipped—so the folks in the store can easily take the items off the truck, and on to the shelves.
Shipping data is no different. As clients (including internal clients) hand their data over to you, one of the first questions you need to be able to answer is: What data am I receiving right now? The more precisely you can answer that question, the more effectively you’ll avoid ambiguities—so you can simplify data processing down the road.
For this to work, it’s important that both the providing party and the accepting party are in agreement on the data content. This means being as explicit as possible about nuances like how the data was collected (rarely obvious without some kind of meta-data). It also means specifying ambiguous terms—like whether the “dollars” you’re referring to are US or Canadian (you’d be amazed how often people get tripped up on that one).
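One lightweight way to enforce that agreement is to write the expected data content down as a spec and check each incoming feed against it. The field names, currency convention, and sample rows below are assumptions for illustration, not a real client feed:

```python
# A minimal sketch of "know what you're getting": validate an incoming
# feed against a spec agreed with the provider. Field names, the
# US-dollar convention, and the sample rows are invented for this sketch.

SPEC = {
    "date": str,      # agreed format: ISO 8601, e.g. "2014-09-30"
    "spend": float,   # agreed to be US dollars, not Canadian
    "channel": str,
}

def validate(rows, spec=SPEC):
    """Return a list of human-readable problems; empty means accepted."""
    problems = []
    for i, row in enumerate(rows):
        missing = set(spec) - set(row)
        if missing:
            problems.append(f"row {i}: missing fields {sorted(missing)}")
            continue
        for field, expected in spec.items():
            if not isinstance(row[field], expected):
                problems.append(f"row {i}: {field} should be {expected.__name__}")
    return problems

rows = [
    {"date": "2014-09-01", "spend": 1200.0, "channel": "search"},
    {"date": "2014-09-02", "spend": "950", "channel": "email"},  # wrong type
]
print(validate(rows))
```

Even a check this simple catches the "truck arrives with the wrong goods" problem at the loading dock, before bad rows pollute downstream analysis.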
Repeat it back
At the end of the data acceptance process, you have to feel that you own and understand the content. That means part of the acceptance process is articulating assumptions, looking at summaries and trends, and comparing the data to other data sources. To make data acceptance really work, take the time to articulate back to the provider what you found in the data provided. That makes the data trusted and memorable. It also introduces common terminology and gets everyone on the same page. And you’d be amazed how much that articulation forces you to conquer data formatting problems up front.
Finding the analysis-ready data points
Once you have all the data in hand, your next job is to figure out which data will actually be useful for your analyses—and what might be best to keep aside. You’ll need to strike a subtle balance between storing as much data as possible—and not trying to boil the ocean. To find that balance, ask yourself a few critical questions:
- What data readily falls into the scope of the project, what clearly falls outside of it—and what lands in between?
- What data will provide the best insight? Compare multiple sources for quality, granularity, data collection methodologies and resulting differences in data volume, coverage, and trends over time. When you see the range of the data you have, you’ll have a better grasp on which data is the most valuable—and which might be a waste of time, given your other options.
- Where can it fit? Some data can be accepted in whole; other data might still be valuable mashed with other data sources to create a full picture.
- Where are the hidden gems? A lot of data will contain interesting facts that might look like errors or inconsistencies to an untrained eye. Keep an open, creative perspective that lets you realize where an “unwanted” piece of data might really be valuable—so you can uncover the less-obvious data points that are especially (and unexpectedly) worth keeping.
Beware of over-normalizing
Once you have all your data in hand, you’ll want to normalize and harmonize the data into something consistent and usable, so it lends itself readily to insights. This is where the shape of the data becomes important. But beware of over-normalization. Since normalization requires reducing information to just a few variables (to make apples-to-apples comparisons easier), there’s always the risk of scrubbing the data so well that you rub away the critical nuances. And those nuances can be painstaking to put back into the mix once you realize you need them. To save time up front, be clear in defining your analytical variables. In other words, keep your data’s “native tongue” intact – if it speaks in terms of marketing spend, revenues, or units in stock, do not translate it into amounts and counts (even if your “inner engineer” is pushing you toward common terminologies).
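As a small sketch of the "native tongue" advice, here are two invented feeds harmonized into one table while keeping each feed's own variable names (spend_usd, units_in_stock) rather than collapsing everything into generic amount/count columns:

```python
# A sketch of harmonizing without over-normalizing: merge two feeds on
# a shared key while preserving domain-specific column names.
# The feed layouts and values below are invented for illustration.

media_feed = [{"week": "2014-W40", "channel": "tv", "spend_usd": 50000.0}]
inventory_feed = [{"week": "2014-W40", "sku": "A12", "units_in_stock": 340}]

def harmonize(media, inventory):
    """Join the two feeds on week, keeping each feed's native fields."""
    rows = []
    for m in media:
        for inv in inventory:
            if m["week"] == inv["week"]:
                rows.append({**m, **inv})
    return rows

print(harmonize(media_feed, inventory_feed))
# Each row still says "spend_usd" and "units_in_stock" -- the nuance an
# over-normalized generic "amount" column would have scrubbed away.
```

The join itself is trivial; the point is the schema choice. An analyst reading the merged table six months later still knows which number is dollars and which is inventory.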
Of course, none of what I’ve described here is exhaustive. And I’ve left out the ways that automation can be an enormous resource—a topic I hope to return to with a follow-up post on data agility. But what I hope I have offered is a catalyst to get you thinking about taking the data wrangling out of the data process—and putting the real analysis back in.