Experiment data¶
By experiments we mean both personalizations and “basic” A/B tests, since both are experiments aimed at improving the user experience.
Tracking the impact of these experiments is key to improving your website.
First we describe how we work with experiment data; after that we cover common pitfalls.
How does it work?¶
Tracking an experiment comes down to two main pieces of information:
- Which experiment(s) are currently active for the user?
- Which variant of each experiment is the user in?
A user can be in multiple experiments at the same time.
The goal of tracking this information is to:
- Be able to see the impact of different variants of an experiment
- Understand the impact of running multiple experiments/personalizations
We need the following to make this happen:
- Link all events to an experiment from the moment a user enters the experiment until the experiment stops.
- Do not link any unnecessary events to an experiment.
- Use one variable for all experiments.
To do this, we have created a mechanism that waits for A/B-testing tools to load before we send any events. This way, we are able to link all events to the experiment. We have also created integrations with tools that make sure we keep the current experiment/variant combination for as long as an experiment is live.
The moment an experiment is paused, we drop the experiment/variation combination; this can happen mid-session. Since we link the experiment data at the event level, events that were not relevant to the experiment are never linked to it.
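The behaviour above can be sketched as follows. This is a minimal, hypothetical illustration (the class and method names are ours, not the actual integration API): each event snapshots the experiments active at that exact moment, so events fired before an experiment starts or after it pauses carry no experiment data.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentTracker:
    """Sketch: link experiment data to events at the event level."""
    active: dict = field(default_factory=dict)   # experiment ID -> variant ID
    events: list = field(default_factory=list)   # (event name, linked experiments)

    def start_experiment(self, experiment_id: str, variant_id: str) -> None:
        self.active[experiment_id] = variant_id

    def pause_experiment(self, experiment_id: str) -> None:
        # Drop the experiment/variant combination, possibly mid-session.
        self.active.pop(experiment_id, None)

    def track(self, event_name: str) -> None:
        # Snapshot the experiments active *at this moment*, so events
        # outside the experiment window are never linked to it.
        self.events.append((event_name, dict(self.active)))

tracker = ExperimentTracker()
tracker.track("page_view")                       # before the experiment: not linked
tracker.start_experiment("111222333", "433233")
tracker.track("add_to_cart")                     # linked to the experiment
tracker.pause_experiment("111222333")
tracker.track("purchase")                        # after the pause: not linked
```

Note that only `add_to_cart` ends up linked to the experiment, even though all three events belong to the same session.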
The syntax of experimentData is as follows:
<experimentA>:<variation of experiment A>,<experimentB>:<variation of experiment B>
For example: 111222333:433233,888444233:123123
In the above example, the user performed an event while being in two experiments:
1: Experiment ID 111222333 with variant ID 433233
2: Experiment ID 888444233 with variant ID 123123
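A small sketch of how this syntax can be parsed and serialized (the helper names are ours, purely for illustration):

```python
def parse_experiment_data(value: str) -> dict:
    """Parse an experimentData string into {experiment ID: variant ID}."""
    if not value:
        return {}
    return dict(pair.split(":", 1) for pair in value.split(","))

def format_experiment_data(experiments: dict) -> str:
    """Serialize a mapping back to the comma-separated experimentData syntax."""
    return ",".join(f"{exp}:{var}" for exp, var in experiments.items())

parsed = parse_experiment_data("111222333:433233,888444233:123123")
# parsed maps experiment ID 111222333 to variant 433233
# and experiment ID 888444233 to variant 123123
```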
The image above shows an example. If you want to check the data yourself, read our console debugging documentation.
You can link the experimentData variable to, for example, a Google Analytics custom dimension, and/or use Harvest Store to analyse your experiments.
Common mistakes¶
Many mistakes are made when collecting and analysing experiment data. Below are some common ones.
Variable scope¶
The scope of the variable is very important; incorrect scoping usually leads to major data quality issues. In Google Analytics we see the following scenario happening:
A custom dimension is set to user scope. When a user triggers an experiment, the experiment/variant combination is written to that dimension. Then, when analysing the data, the analyst forgets that all of the user's data is taken into consideration, because the scope is set to user.
This is wrong: the scope should be set to hit. Why? Because user and session scope are too broad. It can happen that only the last event of a session triggered the experiment; you do not want all of the session's data being attributed to the experiment.
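A sketch of the difference, using made-up session data: only the last event actually happened inside the experiment, yet session-level attribution pulls in the whole session.

```python
# Each event carries its own experimentData (hit scope). Only the last
# event of this session happened inside experiment 111222333.
session_events = [
    {"name": "page_view",   "experimentData": ""},
    {"name": "add_to_cart", "experimentData": ""},
    {"name": "purchase",    "experimentData": "111222333:433233"},
]

# Hit scope: only events that actually carry the experiment are attributed.
hit_scoped = [e["name"] for e in session_events
              if "111222333" in e["experimentData"]]

# Session scope: if *any* event carries the experiment, all events of the
# session are attributed to it - over-attribution.
session_scoped = ([e["name"] for e in session_events]
                  if any("111222333" in e["experimentData"]
                         for e in session_events)
                  else [])
```

Hit scope attributes only the purchase; session scope attributes all three events, including the page view and add-to-cart that happened before the experiment was triggered.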
Multiple custom dimensions¶
Another problem we often come across is that a different dimension is used for each experiment. This is often a workaround for invalid scoping: with session or user scope, when a user triggers multiple experiments, data gets overwritten.
You should use only 1 dimension/variable (with hit scope). Why? Because then you can also detect the effects when a user has been in multiple experiments at the same time.
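With one hit-scoped variable that holds all active experiments, overlapping experiments become visible as combinations. A small sketch with made-up events:

```python
from collections import Counter

# One hit-scoped variable holds *all* experiments a user is in.
events = [
    {"name": "purchase", "experimentData": "111222333:433233"},
    {"name": "purchase", "experimentData": "111222333:433233,888444233:123123"},
    {"name": "purchase", "experimentData": "888444233:123123"},
]

# Purchases per experiment/variant combination, including overlaps.
combos = Counter(e["experimentData"] for e in events)

# Purchases per individual experiment, across all combinations.
per_experiment = Counter()
for e in events:
    for pair in filter(None, e["experimentData"].split(",")):
        exp_id = pair.split(":", 1)[0]
        per_experiment[exp_id] += 1
```

With separate per-experiment dimensions you would only get the per-experiment totals; the combination view (`combos`), which shows how the experiments interact, would be lost.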
Segmenting based on an “experiment started” event¶
To avoid using multiple dimensions, a common mistake is to fire an “experiment started” event and segment based on that event. This practice results in the same data issue as wrong scoping. Why? Because segmenting can only be done at the user and session level.