
How to become a CRO Ninja

07.09.2018

Year after year, Econsultancy finds that a majority of marketers view CRO as a crucial component of their digital marketing strategy, with more than 85% rating it as at least "very important". No doubt you do too, which is why you're reading this.

Doing CRO at its best is both an art and a science. It requires an evidence-based, scientific approach to finding problems, and a creative approach to finding solutions to those problems. It combines the skill sets of multiple disciplines – analytics, UX and copywriting, to name but a few. These disciplines don't just need to be present in the team; they need to operate harmoniously together.

While CRO requires all of these different skills to be done at its best, the reality is that most organisations – save the most digitally mature – are not making the investments required to make that happen. As recently as 2017, a quarter of companies had no one directly responsible for improving conversion rates, and another 30% had only one person.

So where does this all leave the lone CRO manager with ownership over the entirety of a process best performed by a cross-functional team? Or the ecommerce manager who has to balance CRO against a broad range of responsibilities across paid media, merchandising and email marketing?

What is self-evident is that you will never be an expert in all of the CRO disciplines at once. But that's not to say you can't borrow a targeted set of techniques and practices that, together, will improve the success of your optimisation program.

In our view, here are some of the key areas in which you need to skill up:

Building a measurement framework and creating a data foundation

Before even thinking about the analytics tool you’ll use and its implementation, you should be creating a robust measurement framework.

In short, a measurement framework lays out the reasons your website exists – how it serves your organisation – and identifies the ways in which success against those reasons can be measured (KPIs).

As a first principle, your analytics solution should be robust enough to allow measurement of each of the KPIs you defined.

Ideally, you’ll also put yourself into a position to understand the factors driving the performance of each of those KPIs, good or bad. This means moving beyond the “standard” implementation of GA – just adding the tracking code to each page and maybe some basic search or ecommerce tracking.

This means being able to measure each key moment of interaction between a potential customer and your website, so that you can feed this data into your decision making. Interactions with menus, photo galleries, internal links and more might be relevant to generating or validating test hypotheses.

Finally, you need to be able to trust the credibility of the insights you're extracting, which means setting up your analytics account itself (not just the tracking on your website) correctly. For example, you may currently have visits from employees and the agencies you work with mixed in with your customers' data. Clean data is imperative to make sure you're looking at the actual behaviour of actual customers.
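To illustrate the idea, here's a minimal Python sketch of excluding internal traffic from a hit-level export. The IP addresses and page paths are entirely made up, and in Google Analytics itself you would normally achieve this with a view filter rather than in code:

```python
# Hypothetical hit-level export. In GA you would normally exclude this
# traffic with a view filter; this just illustrates the principle.
INTERNAL_IPS = {"203.0.113.10", "203.0.113.11"}  # example office/agency IPs

hits = [
    {"ip": "203.0.113.10", "page": "/pricing"},   # an employee's visit
    {"ip": "198.51.100.7", "page": "/pricing"},   # a real customer
    {"ip": "198.51.100.8", "page": "/checkout"},  # a real customer
]

# Keep only the hits that did not come from a known internal IP.
customer_hits = [hit for hit in hits if hit["ip"] not in INTERNAL_IPS]
print(f"{len(customer_hits)} of {len(hits)} hits are genuine customer traffic")
```

However it's done, the point is the same: analysis and experiment results should be based only on real customer behaviour.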

Analytics and customer insight driven hypothesis generation

Your experiment hypotheses should be developed using a combination of analytics and customer insight.

You should be using analytics to understand what the key user journeys are on your website. This means being able to do some level of segmentation to understand the marketing channels and other sources through which users are arriving at your website, where they land and, in general, what they do thereafter.

The process of mapping out these journeys will focus your attention on the areas of your website which – quantitatively – are the biggest problems. This could be either because of a high drop-out rate, or a lower drop-out rate combined with higher overall traffic volume. It will also give you the context to understand where the most marketing spend is being squandered on non-converting visitors. In essence, you need to understand these things to maximise the payoff from your efforts.
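To make the quantitative side concrete, here's a short Python sketch of journey drop-off analysis. The funnel steps and visitor counts are invented for illustration; real numbers would come from your analytics tool:

```python
# Hypothetical funnel: how many users reached each step of a checkout
# journey. Step names and counts are illustrative only.
funnel = [
    ("product_page", 10000),
    ("add_to_basket", 2400),
    ("checkout_start", 1100),
    ("payment", 700),
    ("order_confirmed", 560),
]

def drop_off_rates(steps):
    """Return (step, drop-off rate from the previous step) per transition."""
    rates = []
    for (_, prev_n), (name, n) in zip(steps, steps[1:]):
        rates.append((name, round(1 - n / prev_n, 3)))
    return rates

for step, rate in drop_off_rates(funnel):
    print(f"{step}: {rate:.1%} drop-off")
```

In this made-up example the product page to basket transition loses the most users in absolute terms, so that is where qualitative research effort would pay off most.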

But being able to identify these focus areas is only about half of the picture. At this point you still have not actually identified any problems, only symptoms.

Developing hypotheses with only this much information will mean relying upon your own assumptions and biases. To minimise these assumptions you will need to add more strings to your bow and understand how to conduct more qualitative research.

Voice of customer research can take many forms, and you should have multiple methods of gathering the necessary information in your toolbox. At different times you should be deploying surveys, heat maps, usability studies and more. All will add rich context to your journey analysis, helping you identify what the actual problems are.

For instance, heat maps may help you identify important information that users aren’t currently seeing. Email surveys can help you refine your value proposition by understanding what customers really like about your product. Usability testing can help draw out UX issues.

It’s through solving actual customer problems (identified through customer insight) at scale (identified through journey analysis of key drop-out points) that you will start to drive serious results from your optimisation program.

Post experiment analysis & hypothesis iteration

Having a thorough understanding of why an experiment hypothesis fared as it did is critical to being able to iterate successfully, whether the experiment wins or loses.

Being able to gain this understanding depends upon the quality of the data foundation you’ve put in place. Let’s say, for instance, that you A/B test a new navigation menu and conversion drops.

Proper post-experiment analysis means working out, based on evidence, why this happened – not just having an opinion about why it did.

Did you affect behaviour in unexpected ways? Did users not open the new navigation menu at all because they were confused by the new wording? Did they open it but not click a link? Are they clicking different links now than before?

Being able to answer these sorts of questions is the only way in which you can confidently identify and fix problems with your hypothesis or its execution.
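Part of that evidence is checking whether the observed drop is statistically meaningful at all. As a hedged sketch – with illustrative numbers, using a plain-Python two-proportion z-test rather than any particular testing tool's method:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.
    conv_* are conversions, n_* are visitors per variant (illustrative)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up example: control converts 150/5000, new menu 110/5000.
z, p = two_proportion_z(conv_a=150, n_a=5000, conv_b=110, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A significant negative result like this one tells you the drop is probably real, but not why; the behavioural questions above are what turn that number into a fixable problem.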

Optimising your project itself

Finally, you should be trying to optimise your optimisation project itself. This may not be necessary in the early days, when you’re only running the occasional test, but it certainly will be as you increase in maturity.

Just like your website, there is a set of KPIs you can put around your work to measure how well you’re doing and what you should be trying hardest to improve.

For instance, you should try to set a target around experiment velocity – how many experiments you’re running each month. Provided the quality remains the same or improves, more experimentation is better.

You should measure how many variations you’re testing on average – there’s solid research showing that testing more variations increases your chances of a win.

Of course, you should also have some measure of effectiveness – like a win rate – to make sure you’re not compromising on quality as you increase your velocity.
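These program-level KPIs are simple to compute once you keep a log of experiments. A minimal sketch, using an invented experiment log:

```python
from collections import Counter

# Illustrative experiment log: (month, outcome) records, all made up.
experiments = [
    ("2018-06", "win"), ("2018-06", "loss"), ("2018-06", "flat"),
    ("2018-07", "win"), ("2018-07", "win"), ("2018-07", "loss"),
    ("2018-07", "loss"), ("2018-08", "win"), ("2018-08", "flat"),
]

def program_kpis(log):
    """Return (experiments per month, win rate) for a list of experiments."""
    months = Counter(month for month, _ in log)
    velocity = len(log) / len(months)  # average experiments per month
    wins = sum(1 for _, outcome in log if outcome == "win")
    win_rate = wins / len(log)
    return round(velocity, 1), round(win_rate, 2)

velocity, win_rate = program_kpis(experiments)
print(f"velocity: {velocity}/month, win rate: {win_rate:.0%}")
```

Tracked month on month, the two numbers together show whether you are trading quality for speed.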

Pulling it all together 

We will be running a free five-week email course where we explore these ideas – and more – in much greater detail. Sign up below.