Increase the impact of your testing program in 2017

Many organizations are still new to the world of testing and optimization. Indeed, our “State of Digital Marketing Analytics in the Top 1000 Online Retailers” report showed that just 18 percent of the retailers ranked 501 to 1000 in the list of the largest retailers are using testing and optimization tools.

For organizations like these, selecting and implementing a testing platform will be crucial to building an optimization practice. However, for organizations that have already established testing and optimization as a practice area, other challenges quickly come to the fore.

Consider the following scenarios. I suspect some of them will be familiar.

  1. Similar or identical tests are running on different areas of the website, without being coordinated.
  2. Similar types of tests are performed repeatedly (e.g., changing the placement of a button), without drawing on learnings from previous iterations.
  3. Staff working in the testing and optimization area receive testing ideas from a variety of stakeholders (which is a good thing!) without a clear rubric for prioritization.
  4. The volume of tests being run, and the time investment associated with that volume, “crowds out” deep analysis of test results.
  5. Testing ideas may not clearly “ladder up” to corporate, marketing or campaign goals.

Each of these situations can arise within an organization that, on the surface, is doing a great job of developing testing hypotheses and putting experiments into production.

However, there’s a common theme here: Without some coordination and process in place, it’s easy for testing to become an uncoordinated, somewhat random practice that doesn’t generate the results it otherwise might.

If you’re facing some of these scenarios, or are simply wondering what you can do this year to take your testing practice to the next level, consider doing the following.

Create a testing & optimization ‘steering committee’

Demand for testing can come from all across the organization, as well as from its partners and agencies. Once again, this is a good thing — many organizations would love to have their stakeholders clamoring for more testing. However, a diverse set of stakeholders asking for testing support can quickly lead to uncoordinated, inefficient testing efforts. Here’s what that can end up looking like.

[Figure: testing requests flowing directly to the testing team from many uncoordinated stakeholder groups]

Each stakeholder group asks for testing in hopes of maximizing its own particular goals, and the testing team, in turn, implements experiments to the best of its ability. However, this is suboptimal for a variety of reasons.

First, consider your “content owners.” They are responsible for different areas of the website, but they may be asking for identical, similar, or at least related experiments — which, with better coordination, could be implemented in a more scalable way.

Second, consider your internal marketing teams or your agencies, as the case may be. It’s common for different groups to own different channels, such as social, display and paid search. Without some way to coordinate efforts, testing hypotheses naturally wind up being channel-centric rather than customer-centric. Optimizing across the customer’s entire journey may yield better results than attempting to optimize individual channels in a vacuum.

The list of potential stakeholders can go on and on. Maybe you’ve got a user experience team asking for improvements to be tested. Corporate marketing, IT, external consultants and many others may ask for support. This leaves the testing team in a bind. With so many stakeholders, the quantity of experiments being run is sure to increase. But is the quality?

To help deal with these issues, consider creating a testing “steering committee.” This group’s role should be to help mature individual, specific testing ideas into hypotheses that can make a coordinated, broad-based impact on your business as a whole. The group should have representatives from different marketing channels, product categories, internal teams and so on.

It might sound like this layer of management would slow down the testing process, but I’d argue that’s a feature, not a bug. Indeed, by vetting the individual testing ideas coming out of the stakeholder groups, this steering committee can help the core testing team focus on launching tests with the greatest likelihood of making a broad-based impact.
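One way to make the committee’s vetting concrete is a simple weighted scorecard. The sketch below is a hypothetical illustration only — the criteria, weights and test ideas are my assumptions, not something prescribed in this article — but it shows how a committee could rank incoming ideas by likely breadth of impact rather than by who shouted loudest.

```python
# Hypothetical scorecard a steering committee might use to rank test ideas.
# Criteria, weights and the sample ideas are illustrative assumptions.

WEIGHTS = {
    "goal_alignment": 0.4,   # how directly the test supports an org-wide goal
    "cross_channel": 0.3,    # impact beyond a single channel or page
    "novelty": 0.2,          # builds on, rather than repeats, past tests
    "effort": 0.1,           # inverse of implementation cost (5 = cheap)
}

def score(idea):
    """Weighted sum of the idea's 1-5 ratings across all criteria."""
    return sum(WEIGHTS[k] * idea[k] for k in WEIGHTS)

ideas = [
    {"name": "Homepage CTA color", "goal_alignment": 2, "cross_channel": 1,
     "novelty": 1, "effort": 5},
    {"name": "First-visit checkout funnel", "goal_alignment": 5,
     "cross_channel": 4, "novelty": 4, "effort": 3},
]

# Highest-scoring ideas go into production first.
for idea in sorted(ideas, key=score, reverse=True):
    print(f"{idea['name']}: {score(idea):.2f}")
```

Even a crude rubric like this forces the conversation the committee exists to have: which ideas ladder up to shared goals, and which are narrow, channel-specific requests.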

[Figure: stakeholder testing requests routed through the steering committee before reaching the testing team]

If your KPI is simply the quantity of tests launched, you’ll probably see that this structure results in fewer tests being run. But, of course, most of us care much more about the impact of the tests being run.

By implementing a structure like this, you’re less likely to run redundant tests, less likely to have channel-specific optimizations happening in a vacuum, and more likely to launch experiments that are thoughtfully designed to make a positive impact across channels.

Create a testing road map that meets overarching goals

Your steering committee will help you cut down on the quantity of tests being run, promote a focus on impact and quality, and ensure that tests aren’t designed to serve just one silo of the business. Still, it helps to have a “north star” to guide your testing efforts — in other words, optimization efforts are more valuable when everyone understands how to connect them back to the organization’s overarching goals.

For example, imagine a retailer with several categories of merchandise. 2017 goals may include maximizing first-time customers and improving margins in Category X. When goals are simple and clear, it becomes much easier for stakeholders to build a business case around their testing ideas.

By the same token, it becomes much easier to prioritize which tests are most likely to make a meaningful impact, and which ideas need further refining before going into production.

In this scenario, you might prioritize experiments designed to improve funnel progression for first-time site visitors. Similarly, you might run a series of tests to maximize revenue from upsells or cross-sells in Category X. Other testing ideas, while still potentially valuable, might be deprioritized and launched when other experiments aren’t in the pipeline.

So, if organization-wide goals aren’t clear or haven’t been set, that’s the place to start. Ensure that your testing team and your steering committee are on the same page in terms of what the org-wide priorities are, and ask them to create a “road map” of testing ideas that clearly meet those priorities.
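A road map like this can even be kept as lightweight structured data that makes the goal linkage explicit. The sketch below is a hypothetical example — the goal names and test entries are assumptions for illustration, echoing the retailer scenario above — showing how you might flag any planned test that doesn’t ladder up to an org-wide goal.

```python
# Hypothetical road-map check: every planned test must "ladder up" to one of
# the year's org-wide goals. Goal names and test entries are illustrative.

ORG_GOALS = {"maximize_first_time_customers", "improve_category_x_margin"}

roadmap = [
    {"quarter": "Q1", "test": "Simplified signup form",
     "goal": "maximize_first_time_customers"},
    {"quarter": "Q2", "test": "Category X cross-sell module",
     "goal": "improve_category_x_margin"},
    {"quarter": "Q2", "test": "Footer link reshuffle", "goal": None},
]

def unaligned(roadmap, goals):
    """Return the tests that don't map to any org-wide goal."""
    return [item["test"] for item in roadmap if item["goal"] not in goals]

for test in unaligned(roadmap, ORG_GOALS):
    print(f"Deprioritize or refine: {test}")
```

Ideas that come back flagged aren’t necessarily bad — they’re the ones to refine further, or to slot in when nothing goal-aligned is waiting in the pipeline.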

A good road map will show how fundamental learnings from early experiments will inform the development of future tests designed to increase performance even further. At the end of a year, you should be able to look back and see how you’ve remained focused on your most important goals and used data to optimize toward them as part of a well-coordinated, step-by-step process.

Ultimately, taking your organization’s testing prowess to the next level isn’t just about the latest technology. Of course, you may reach a point where you’ll need more complex tools to accomplish your goals. But for many organizations, all the necessary tools are already in place.

Instead, putting some more formal structure and process around your testing practice may be all you need. Remember, launching fewer tests — but launching tests with a much higher potential for impact — will be a sign that you’re making progress.

Some opinions expressed in this article may be those of a guest author and not necessarily Marketing Land.

About The Author

Nick Iyengar is an Associate Director of Digital Intelligence at Cardinal Path, where he is responsible for helping his clients improve their profitability by building their analytics capabilities. He recently returned to Cardinal Path for his second tour of duty, having completed his MBA at the University of Michigan Ross School of Business last year. At Cardinal Path, Nick has led Google Analytics implementations for dozens of organizations in a wide variety of industries. Prior to joining Cardinal Path, Nick began his career in digital analytics at Google, where he managed Google’s Analytics Guru team.

