Pre-testing ads is not divisive, it’s a no-brainer

Agencies will complain pre-testing snuffs out the creative spark, but in reality it helps brands identify the best-performing ads and make them even better.

In the middle of summer last year, the team at Sky was thinking about Christmas. Specifically, about the ads for its subscription TV channels that would run during the all-important festive period. At the top of the list was Sky Cinema, its premium movie subscription channel. On the table were six different creative ideas for the Christmas campaign. Each had its respective merits and internal sponsors. Each went down a very different route.

In a bygone era, these options would have sparked a political shit-fight of epic proportions as different factions within the client team and agency made the case for their baby over the others.

But Sky’s marketing team is better than that. They pre-tested all six. Each was turned into an animatic and assessed by System1. The results suggested a clear winner, a simple carousel of great movie moments edited into an emotional 60-second hymn to the power of cinema.

It was the least flashy, least complicated proposal. The team had been sure it needed something bigger and more eye-catching. But according to Ben Case, managing director for consumer strategy at Sky, the results were so clear and the ad so superior that the other five options were put to bed immediately, and work on the 2023 Christmas ad began within a few days of the research (see the end result below).


Pre-testing advertising is one of those ancient chestnuts that splits the marketing community right down the middle. Half the CMOs I have worked for will tell you it’s unnecessary. That they do research but only to set up their strategy. Which they brief into their agencies. Agencies that they trust and rely upon for creative leaps. Leaps that might be shortened or compromised by cycles of research. Using data at such a late stage in the process and for something as mercurial as advertising creative makes no sense.

But the other half head in the opposite direction. They are market-oriented. They believe in research. Why would you not test probably the biggest single marketing investment of the year? If you believe pricing or product development is bolstered by consumer research, why would you not think the same about advertising?

To push back against pre-testing is to reveal a lack of awareness.

Of course, there is also the agency perspective, which – like the agencies themselves – is a little outdated and precious and misses the bigger picture. Research reduces the big idea. Lessens the disruptive power of great creative. And, generally, puts constraints on a model that should be left to its own devices. It was exemplified beautifully this week in the new episode of the On Strategy podcast. Host Fergus O’Carroll interviewed Sofia Colucci, the CMO at Molson Coors. Both were clear on the “dangers” of pre-testing and following a pre-set approach at the expense of standing out.

“We have to be careful not to fully buy into this,” O’Carroll notes in his intro to the show on LinkedIn. “Our job is not to beat the test, it’s to build the business. Conforming to a type, or a formula, will result in a lack of originality.”

Until a few years ago, I would have sided with O’Carroll and the CMOs who did their research and then trusted their agencies to deliver the creative goods. But pre-testing has changed. Is changing. And with those changes, it’s time to acknowledge that to push back against pre-testing is to reveal a lack of awareness of these changes, and the opportunities they present to avoid failure and improve effectiveness.

Evolutions in testing

So, what has changed? First, the things we pre-test. Second, the way that pre-testing happens. Third, the predictive capabilities of pre-testing. And fourth, how long all of this takes.

One thing everyone agrees on is that the pre-testing of old was one step from pointless. A guy with a handlebar moustache and flares would create an ornate storyboard. Then someone in a tank top, smoking a cigarette, would act this out for a small number of target consumers in a room. The thing being pre-tested was so far removed from the ad that would eventually be produced that it rendered the whole exercise pointless.

But things change. These days it’s possible to create an almost scene-for-scene animation quickly and without great expense. It’s not the ad itself, but it’s nine hundred percent closer than the black-and-white sketches of old.

See the Carlsberg example in the video below:

And compare that with the finished ad here:

In the past, once the basic storyboard was created, traditional pre-testing then assessed it using what can only be described as a bad focus group. Having seen the badly acted-out script, consumers were quizzed on whether they liked the potential ad. How could it be improved? Did it make them want to buy more of the brand’s stuff? It really was that basic and that pointless. And if an agency or client had an in-house favourite, it was relatively easy to ensure the pre-testing outcome went in whatever direction was desired. Most of the current criticisms of pre-testing are actually outdated references to this earlier, crapper age.

Today’s pre-testing approaches rely on pre-recruited representative panels, digitally delivered animatics and much more accurate metrics. System1, for example, which has come to dominate the field of pre-testing in a remarkably short period of time, has brought much needed consistency and simplicity to a growing army of clients who use its service to test and refine their upcoming advertising.

The company can pre-test any stage of draft advertising, usually within hours, not days, and come back to the client with predicted ‘spike’ and ‘star’ scores for their ad. The spike score predicts the short-term sales impact of the campaign. The star score provides a longer-term prediction of the brand-building impact the ad will have.

When the money involved in running these ads is so huge, the tiny costs of pre-testing make it a no-brainer.

The billion-dollar question, of course, is whether these assessments really can predict the actual impact of the campaign being tested. It rankles many marketers when System1 or Kantar offer up early-bird assessments of Christmas ads or the upcoming array of Super Bowl commercials. “Don’t we have to wait for the actual impact of the ad on the market?” many will say. “Can these things actually predict the future?”

That was Sky’s question when it looked at using System1 to pre-test its own advertising. The UK broadcaster is not only one of the country’s biggest advertisers, it’s also among the most savvy. While Sky liked the idea of hiring System1 to tell it which of its ads would and would not work, the data-driven brand was worried System1’s apparently simplistic approach would leave a lot of room for error. The team was sceptical, to say the least, that System1 really could pre-test with such predictive power.

So Sky did something every marketer should do: it tested the testers. The brand already had advanced econometric data on previous campaigns stretching back more than four years, which reviewed the performance of all of its ads and what they had or hadn’t done for its business. It asked System1 to go back into its archive and rate all the Sky ads’ long- and short-term impact. It then compared what each ad actually achieved, according to Sky’s assessment, versus what System1 predicted it would have done.

And the results were stunning. On both short-term sales impact and longer-term brand building, System1’s predictions were incredibly accurate. Multiplying out the System1 prediction with the amount of media money Sky invested in each ad produced a near-perfect correlation with Sky’s own econometric results.
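
To picture what that validation looks like in practice, here is a minimal, purely illustrative sketch in Python. The column names, scoring scale and figures are invented assumptions, not Sky’s or System1’s actual data, but the logic follows the article: weight each historical ad’s predicted score by the media spend behind it, then correlate the result with the econometrically measured impact.

```python
# Hypothetical sketch of Sky's "test the testers" check (illustrative only;
# column names and figures are invented, not Sky's or System1's actual data).
import pandas as pd

# One row per historical ad: the pre-test prediction, the media money spent
# behind it, and the impact measured by the brand's own econometrics.
ads = pd.DataFrame({
    "ad":              ["ad_a", "ad_b", "ad_c", "ad_d"],
    "predicted_score": [4.2, 2.1, 3.5, 1.4],    # e.g. a star/spike-style rating
    "media_spend_m":   [6.0, 3.5, 5.0, 2.0],    # £m invested behind each ad
    "measured_impact": [25.7, 7.1, 17.8, 2.9],  # econometric estimate of impact
})

# Weight the prediction by the media money behind each ad...
ads["predicted_impact"] = ads["predicted_score"] * ads["media_spend_m"]

# ...then check how closely the weighted prediction tracks the measured result.
correlation = ads["predicted_impact"].corr(ads["measured_impact"])
print(f"Correlation between predicted and measured impact: {correlation:.2f}")
```

A correlation close to 1 on a back-catalogue of real campaigns is what gave Sky the confidence to use the same scores prospectively.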

Sky now pre-tests around 400 commercials a year. The process not only enables Sky to produce generally better ads, more consistently. It also reduces the amount of painful organisational tension around what is and isn’t the best ad. It speeds the process and, as in the case of the Christmas ad, enables the company to quickly pick a winner and focus on it. When the money involved in running these ads is so huge, the tiny costs of pre-testing make it a no-brainer. Sky’s Ben Case explains: “Spending thousands on pre-testing our creative to make the millions spent on media work harder pays for itself many times over.”

And pre-testing is not just about selecting an ultimate winner. It’s about improving that winner even further to extend and advance its effectiveness. Sky, like many other clients, uses System1 to improve its ads while in development. Where to add distinctive brand assets. Which audio to use. When to cut away. When to leave the shot lingering.

Remember the point of advertising

By now, every agency creative reading this has their fists clenched in rage. They are probably shouting about creativity, disruption, standing out and the fallacy of testing. But so be it. There will always be a precious, artisan perspective on advertising that gets in the way of its ultimate role: to commercially benefit the client’s business. Yes, on occasion, there is a black-swan ad that tests poorly but goes on to prove incredibly effective.

But then there are the 999 other examples in which a shit ad tested badly, and a great ad tested brilliantly and was subsequently made better by the pre-testing. Clients are not in the business of taking risks for the sake of the long shot of creative infamy. Well, the ones that did not come from an agency, at least.

Never forget that the ads relegated by the advertising industry to ‘turkey of the week’ in Campaign usually perform significantly better than the average ad. Or that the winners of the various Cannes awards for creativity usually underperform an average basket of ads. Agencies tell us pre-testing does not work (it does), while claiming they have an in-built radar for creative success (they don’t). Like everyone else involved in marketing, they need to learn market orientation. They are not the consumer, they are the producer. And that paradigmatic gap can only be bridged with research.

Jon Evans from System1 once told me a horror story. He was working for a big client about to change its main brand campaign. He arrived at a client meeting with data from the company’s previous campaign and its proposed new one, and showed the team conclusively that the old campaign was significantly superior to the new one it planned to launch. In the end, the client accepted Evans’s point and stayed with its old execution. But rather than celebrate the money saved and the increased effectiveness ahead, both client and agency were despondent.

Why? They wanted to make a new ad. Shoot it. Get excited about it. Launch it. Make a nice film. They missed the point of why they were there. Not to make ads but to build brand, drive sales and ultimately progress the business.

Pre-testing has changed. It is now a marketing no-brainer. You need to do it. Agencies might push back. Let them; it’s not their money. If it were, perhaps they might think differently.