
Apptimize

A/B testing and analytics for mobile applications.

During my time at Apptimize, one of my main achievements was a design overhaul of the entire product. A key task during the redesign was examining why and how our users were setting up experiments. One key success metric was how many users set up goals in the analytics platform for their experiments.

The problem

The existing setup process for each experiment consisted of five steps, the fifth being the experiment results. On the results step, users could set up what they wanted to measure. The problem was that users would set up experiments without goals and then abandon them, which often led to them abandoning experiment creation altogether.

Existing five-step path

After doing some research and talking to customers who followed this journey, we discovered two key points:

  1. They don't know what they want to measure
  2. They aren't sure how to measure if an experiment is successful

Measuring goals

Let's take a step back to your school science courses. Normally you have a hypothesis before you start an experiment, not the other way around. So we had our own hypothesis: why not have users set their goals up front? If they don't have a goal, why is an experiment being set up at all? Users still have the option to skip setting up a goal, but the design subtly discourages it.

As for the rest of the journey, we combined steps three and four of the original flow, targeting and launch, letting users target and immediately launch from the same page (with a confirmation prompt). Previously these two pages showed the same information; one simply let you change the targeting settings while the other let you launch the experiment.

Step two goal screen UI

Notice that results are not part of the setup flow; they sit one level higher in the hierarchy, at the same level where you enter the experiment's configuration. This separates the two stages, and when an experiment is running (or has been run), the user lands on the results page by default.

Journey map

The consequence

What was the consequence of this change? Fewer experiments were created, but the ones that were created had goals set up and their results were measured. This goes to show that higher activity doesn't always amount to an accomplishment; if fewer users are using the product but using it correctly, that can be considered a win. But where did the rest of the users go?

Surprisingly, many calls were coming in to customer care with questions about what a goal is, how to decide on one, and so on. This enabled us to educate our users, help them set up experiments, and genuinely consult with them on how to run experiments and what they should be measuring as goals.

With the influx of customer service calls, we naturally changed the way we onboard and sell to potential customers: training them from the beginning and helping them understand the product, how it will help their own product grow, and how to build engagement with their own user base.

Results redesign

The final piece of the puzzle was the results page. We redesigned it completely to make the data easier to read and included sub-captions for the various terms to help train our customers in the product. We reworked how funnels appear and created clear legends and separation of data to guide users through the data in a logical way.

Results interface
Funnel design
