All of us are in the business of growing businesses. Regardless of where across the marketing spectrum you sit, your job is to generate growth.

Marketing’s contribution to growth is vital: we raise awareness among target segments, change their perceptions and, ultimately, make them want to use our products and services. Growth is our core task, and it should be where the majority of our time is spent.

Fortunately, there’s more research on brand growth and effectiveness than ever. Today I want to focus on one of the most effective drivers of growth: channel testing.

To put it bluntly, channel testing means adding new channels to your marketing mix, rather than relying only on the ones you have always used. In today’s media landscape, that might mean adding streaming audio or radio, or a digital channel like native advertising. All that matters is that you test it properly.

Testing works by finding new ways to deliver growth, and it can be applied across different parts of the marketing funnel. Traditionally, testing has focused on creative and messaging.

Testing is open to critique. Because we ask for or measure a result before the launch of a major campaign, we inherently bias testing towards responses that are short-term in nature.

As a result, testing can filter out long-term effectiveness.

The godfathers of effectiveness, Les Binet and Peter Field, suggest campaigns generate between 7% and 43% uplift from adding to or testing the structure of media investment. Even at the low end, that is 7% on the biggest part of your marketing spend.

This effect is analysed in Binet & Field’s The Long and the Short of It. In their analysis of the IPA Databank, campaigns whose creative and messaging were pre-tested showed 10% higher sales growth over a six-month period, but 10% lower sales growth over two or more years. Not exactly the result most CMOs are looking for.

The authors suggest that this pattern may change as testing methodology evolves:

“Hopefully, this pattern will change in future analyses of the IPA Databank, when sufficient data has been collected for campaigns pre-tested by the emerging emotionally-focused techniques that should offer better long-term prediction”

Typically, when you test, you test for short-term results. Binet & Field are telling you not to do that. They are showing the importance of designing your testing to deliver what you are aiming for. If all you need is short-term sales results, design your testing to measure short-term sales results. If you need a longer-term result, test against indicators of longer-term success, such as fame.

To test well requires well-designed tests.

Designing a successful test must start with a hypothesis. Why has this test been selected, and what outcome will show whether it was successful? These are the key questions, and they need to be carefully understood. If you are aiming for long-term growth in brand consideration, ensure the outcome measurement and the hypothesis behind the test are aligned with that aim. If your goal is short-term sales, you would have a drastically different measurement and hypothesis.
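One lightweight way to keep the hypothesis and the outcome honest is simply to write the test plan down before launch. The sketch below is a minimal illustration in Python, with invented field names and example values, of the kind of record that forces those questions to be answered up front; a brand-consideration test and a short-term sales test end up looking very different on paper.

```python
from dataclasses import dataclass

@dataclass
class ChannelTest:
    """A test plan written down and agreed before launch."""
    channel: str
    hypothesis: str           # why this test was selected
    primary_metric: str       # the outcome that decides success or failure
    measurement_window: str   # how long to wait before judging the result

# Hypothetical long-term brand test.
brand_test = ChannelTest(
    channel="Streaming audio",
    hypothesis="Adding streaming audio will grow brand consideration in the target segment",
    primary_metric="Brand consideration (tracking survey) vs. unexposed regions",
    measurement_window="6-12 months",
)

# Hypothetical short-term sales test.
sales_test = ChannelTest(
    channel="Native display",
    hypothesis="Native can deliver online conversions more efficiently than incremental search spend",
    primary_metric="Total online conversions for the media investment (CPA)",
    measurement_window="6-8 weeks",
)

print(brand_test)
print(sales_test)
```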

One example is a recent case study from Verizon Media, which used testing across different audiences to build excitement for a major movie release. By showing native advertising to each tested audience segment, they saw 35 percent higher click-through rates and 200 percent higher video completion rates for native video ads.

Verizon Media’s solution for advertisers looking to increase awareness is always to test multiple audiences for optimal performance, a move that often increases efficiency while securing results for clients.

Above and beyond everything else, however, is communication. What is the goal, how is it measured, and why is that the measurement? If everyone involved in the test knows the answers, you are more likely to be successful.

One error that commonly occurs in media testing is framing the outcome in a way that biases the test before it even starts. As an example, let’s look at the simplest possible test: a marketer investing in the search channel to generate online conversions. The test hypothesis would be that other channels may be able to generate those conversions more effectively. Measuring the test against a conversion rate outcome appears connected to the goal.

However, there is a problem with this test. Conversion rate is connected to the goal, but as an outcome measure it is flawed. The total online conversions delivered for the media investment, and therefore the cost per acquisition (CPA), is a much more effective outcome measure because it focuses on the end outcome.
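To see why, here is a small worked example with entirely invented numbers: a channel can win on conversion rate while delivering far fewer total conversions, at a far higher CPA, for the same test budget.

```python
# Hypothetical illustration: a channel can "win" on conversion rate while
# losing on total conversions and CPA for the same media investment.

def summarise(channel, spend, cost_per_click, conversion_rate):
    clicks = spend / cost_per_click
    conversions = clicks * conversion_rate
    cpa = spend / conversions
    print(f"{channel}: conversion rate {conversion_rate:.1%}, "
          f"{conversions:,.0f} conversions, CPA ${cpa:,.2f} "
          f"on ${spend:,.0f} spend")

# Both channels receive the same test budget (all figures invented).
summarise("Search",  spend=10_000, cost_per_click=0.20, conversion_rate=0.02)
summarise("Display", spend=10_000, cost_per_click=2.00, conversion_rate=0.04)

# Output:
# Search: conversion rate 2.0%, 1,000 conversions, CPA $10.00 on $10,000 spend
# Display: conversion rate 4.0%, 200 conversions, CPA $50.00 on $10,000 spend
```

Judged on conversion rate alone, the display channel looks like the winner; judged on total conversions for the investment, search is clearly doing the real work.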

When you’re designing your tests, focus on the end outcome. What is the goal you are aiming for, and how is that measured?

Testing can be incredibly powerful. However, to find this power, your tests must be well designed, focused on the end outcome you are looking for, and clearly communicated to everyone involved. Is that how you manage your testing?