This article explains when you should use an experiment, how to determine a winning variant, and some best practices to get the most out of your experiment.
What is an experiment and when should I use one?
An experiment lets you test out two or more variants of your Experience so that you can learn what variations of your content work best. For example, if you want to improve your sign-up flow, you may choose to test different colored sign-up CTAs.
When should I use an experiment?
We recommend creating an experiment when you want to find out what variation of your content resonates best with your audience. In our example, you would serve a Tribe of Visitors a blue, red, or green sign-up CTA in order to find out which colored button encourages the most audience members to sign up for your platform, product, or service.
You may then choose to promote the winning variant to a Scenario. Continue reading to learn how you can select a winning variant.
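The product's actual serving logic isn't documented here, but weighted variant allocation can be illustrated with a short sketch. A deterministic hash ensures the same visitor always sees the same variant; the function name, user IDs, and weights below are all hypothetical.

```python
import hashlib

def assign_variant(user_id: str,
                   variants=("blue", "red", "green"),
                   weights=(50, 25, 25)) -> str:
    """Deterministically bucket a visitor so repeat visits see the same variant."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % sum(weights)  # a number in the range 0..99
    cumulative = 0
    for variant, weight in zip(variants, weights):
        cumulative += weight
        if bucket < cumulative:
            return variant

# The same visitor ID always maps to the same CTA color.
print(assign_variant("visitor-123"))
print(assign_variant("visitor-123"))
```

Because the bucket is derived from the visitor ID rather than a random roll, no per-visitor state needs to be stored to keep the experience consistent.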
How will I determine a winning variant?
We recommend that you run your experiment for enough time to identify how your audience interacts with your variants before selecting a winner. You’ll be able to view and track this with the data provided in the Activity tab.
You may determine a winner based on the Served total, Event total, and Success rate. Continue reading for definitions of each metric.
- Variant allocation: The percentage of your selected audience that will be targeted with a particular variant. This is important context: for example, if variant A has an allocation of 10%, then 10% of the selected Tribes will be served this variant, which may explain why its Served total is lower than that of other variants.
- Served total: The number of times this variant was served to your selected audience within the selected timeframe.
- Event total: The number of times your chosen event occurred within the selected timeframe. For example, if you were to create an experiment to optimize your sign-up CTA, you may choose the event Clicked.
- Success rate: Success rate is calculated by dividing the Event total by the Served total. Continuing our example, the success rate would be calculated as Clicked/Served.
The Activity tab reports metrics using the day count selected, e.g., 7 days. Following a 30-day period, per GDPR rules, we discard your user data.
How do I get the most out of my experiment?
Continue reading for some best practices for running your experiment.
How long should I run my experiment?
We recommend that you run your experiment for enough time to identify how your audience interacts with your variants before selecting a winner.
Can I run multiple experiments at one time?
Each member of your selected audience will be served a variant of your experiment. This may complicate which variant each user or visitor is served if the same Tribe is involved in multiple concurrent experiments. We recommend that each Tribe only be part of one test at a time.
Can I run my experiment again?
Starting an experiment again will reset existing data. If you wish to keep any data from the last 30 days, we recommend creating a new experiment.
What to read next
Learn how to set up and manage your experiment.