
Why opinions alone aren't good enough in Optimisation

24.01.2018

The objective of your optimisation program is probably to drive more business value, whether in the form of improved customer satisfaction, more transactions, or more leads.

Relying upon the opinions of those within your organisation will not allow you to maximise how much of this value you can deliver. Often, it will in fact be detrimental – causing you to pursue solutions to imaginary customer problems, or develop and execute on the wrong solutions to real ones.

Consider the following case study:

Shortly after I started in digital at an eCommerce company, senior leadership made the decision to rebuild our navigation menu. Considerable effort was put behind this test by our best designers and developers, precious commodities in any organisation.

Two test variations were launched, and after several weeks we found that conversion rate had actually gone down under both. What went wrong? Why did users not take to the new menu? We could only assume.

This case study demonstrates some of the problems with relying on your own opinions when deciding what and how to test. Here are the problems with our approach in greater detail:

Problem 1: We relied upon opinions and assumptions to determine that an opportunity existed – and what its size was

The decision to redevelop the navigation menu was not based on data or insights; it was based on opinions. The old navigation menu was seen as performing poorly and looking dated, and had to go. But why? And why now?

We should have set out to understand two things:

  • Did headroom actually exist to optimise the navigation menu’s performance?
    We could have arrived at this understanding by answering questions such as “What percentage of users who engage with the nav do so effectively?” (a sketch of this calculation follows the list)

  • Was it actually a pain point in the first place?
    For this we needed to get into the minds of actual users: “Are there things you can’t find in the nav?”, “Are there elements of it you find confusing?”
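
Taking the first question as an example, a back-of-the-envelope headroom check needs only two counts. Here is a minimal sketch in TypeScript; the definition of “effective” engagement, the event counts and the numbers are all illustrative assumptions:

```typescript
// Sketch: estimating nav headroom from two event counts.
// Numbers and the definition of "effective" are illustrative assumptions.

interface NavStats {
  sessionsThatOpenedNav: number;      // sessions in which the nav was opened
  sessionsThatClickedNavLink: number; // of those, sessions that clicked a nav link
}

// "Effective" engagement here means: opened the nav AND followed a link from it.
function navEffectivenessRate(stats: NavStats): number {
  if (stats.sessionsThatOpenedNav === 0) return 0;
  return stats.sessionsThatClickedNavLink / stats.sessionsThatOpenedNav;
}

// Made-up example: if only 55% of nav opens end in a click,
// the other 45% is potential headroom worth investigating.
const rate = navEffectivenessRate({
  sessionsThatOpenedNav: 20000,
  sessionsThatClickedNavLink: 11000,
});
console.log(`${(rate * 100).toFixed(1)}% of nav engagements end in a click`);
```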

Problem 2: We prioritised our hypothesis based on opinions, not data

On a fundamental level, test hypotheses should be prioritised against each other on the basis of impact and effort: how much value will be returned, and how much effort will be expended to obtain that value?

We did not redevelop the navigation menu because we knew for sure that, in net terms, it was a better decision than restructured search results, a new product detail page design or a quicker checkout flow.

We should have prioritised these opportunities objectively and dispassionately.
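
As a minimal sketch of what that could look like, each hypothesis can be given rough impact and effort scores and the backlog ranked on value per unit of effort. The hypotheses and scores below are illustrative assumptions; richer frameworks such as PIE or ICE follow the same principle:

```typescript
// Sketch: ranking test hypotheses on impact vs. effort.
// Hypotheses and scores are illustrative assumptions.

interface Hypothesis {
  name: string;
  impact: number; // estimated value returned, scored 1-10
  effort: number; // estimated cost to design, build and run, scored 1-10
}

// First-pass score: value per unit of effort.
const score = (h: Hypothesis): number => h.impact / h.effort;

const backlog: Hypothesis[] = [
  { name: "Rebuild navigation menu", impact: 5, effort: 8 },
  { name: "Restructure search results", impact: 7, effort: 5 },
  { name: "Redesign product detail page", impact: 6, effort: 6 },
  { name: "Streamline checkout flow", impact: 8, effort: 7 },
];

// Rank the backlog, highest value-for-effort first.
[...backlog]
  .sort((a, b) => score(b) - score(a))
  .forEach((h) => console.log(`${h.name}: ${score(h).toFixed(2)}`));
```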

Problem 3: We developed and implemented a solution without voice of customer input

Our designers developed a new navigation menu based on what they thought would work best, iterating on feedback from the wider organisation.

What was lacking in our solution was the voice of the customer. We didn’t design a solution around problems our customers identified, but around our own assumptions.

We didn’t prototype our solution and test it with actual customers before expending the effort to develop it. Had we done so, we might have received feedback that made us change our approach.

Resources like Usertesting.com are great for this.

Problem 4: We had no way to understand what factors drove the result we got

Many of your conversion tests will not succeed; this is a fact, no matter how good the hypothesis or how well it is executed.

The fact that a test has not succeeded does not mean that there is no value to be extracted from it. Often, in a winning or losing test, the most valuable thing you can understand is the “why”: why did the test win or lose? What behaviour change did we drive versus what we expected?

To capture these learnings, you will need robust tracking in place. Out of the box, web analytics products are not strong at illustrating how customers interact with your pages, so you'll need to do some planning to make sure you're ready to capture what you need.

Using my case study as an example, we should have had tracking in place (sketched below the list) to understand when:

  • Users opened the navigation menu
  • Users opened a level 2 navigation menu
  • Users clicked a link in the navigation menu and where that link was
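
For illustration, assuming the site runs Google Analytics via gtag.js, wiring up these three events might look like the sketch below. The event and parameter names are my own assumptions and would need to match your measurement plan:

```typescript
// Sketch: custom events for the three nav interactions listed above,
// assuming Google Analytics is loaded via gtag.js. Event and parameter
// names are assumptions; align them with your own measurement plan.

declare function gtag(
  command: "event",
  eventName: string,
  params?: Record<string, unknown>
): void;

// User opened the top-level navigation menu.
function trackNavOpen(menuLabel: string): void {
  gtag("event", "nav_open", { menu_label: menuLabel });
}

// User opened a level-2 (sub) menu.
function trackSubNavOpen(parentLabel: string, subLabel: string): void {
  gtag("event", "nav_sub_open", { parent_label: parentLabel, sub_label: subLabel });
}

// User clicked a nav link; record where in the menu the link sat.
function trackNavLinkClick(linkText: string, level: number, position: number): void {
  gtag("event", "nav_link_click", {
    link_text: linkText,
    nav_level: level,
    nav_position: position,
  });
}
```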

We had none of this tracking, and so, again, we only had assumptions to go on about why the test failed.

Had we learned, for example, that in the new navigation menu more users were opening the top-level nav but failing to open the sub-menus, this might have helped us iterate and arrive at a winning design.

In short, relying upon opinions alone is simply not good enough to deliver an optimisation project that creates real value. You need:

  • Data to show that an opportunity actually exists, and where it sits relative to others
  • Insight from customers into what the problems actually are
  • Feedback from testing your solution with actual customers
  • Robust tracking in place so that, if a test fails, you can still extract some value from it