A/B Testing

If you have a website, you probably want to keep changing its appearance and behavior to improve important metrics: revenue, conversion rate, user satisfaction, and so on.

To figure out which changes will move you toward your goal, you can draw on different sources of information: marketing research papers, your own experience as a user, or direct feedback from end users. All of this can help, but it will not give you a solid quantitative answer about which approach is better or how significant the change really is.

A/B testing (also known as split testing) is a more scientific, more pragmatic, experimental approach.

You have a base version of the website: the one that is currently available to customers.

You also have some hypotheses about how to change the website to improve your metrics. As before, you form these hypotheses from the information available to you: user feedback, your own intuition, analytics from your BI tools, and so on.

Now you are ready to run experiments. The goal is to figure out which hypotheses actually help your business, and how much they influence your metrics.

You implement these hypotheses and randomly split traffic between the base version of the website and the new versions. Some users see the base version; others see one of the new versions.
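One common way to split traffic is to assign each user to a variant deterministically, so the same user always sees the same version across visits. Here is a minimal sketch; the experiment name, variant names, and traffic weights are hypothetical:

```python
import hashlib

VARIANTS = ["base", "A", "B"]    # hypothetical experiment arms
WEIGHTS = [0.50, 0.25, 0.25]     # fraction of traffic for each arm

def assign_variant(user_id: str, experiment: str = "homepage-test") -> str:
    """Map a user to a variant deterministically, so the assignment
    is stable across visits and devices that share the same user_id."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000   # uniform value in [0, 1)
    cumulative = 0.0
    for variant, weight in zip(VARIANTS, WEIGHTS):
        cumulative += weight
        if bucket < cumulative:
            return variant
    return VARIANTS[-1]

print(assign_variant("user-42"))   # same result every time for this user
```

Hashing the user ID together with the experiment name keeps assignments independent across experiments, while still being reproducible without storing per-user state.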

While doing this, you also collect metrics broken down by website version. You know, for example, the conversion rate for users on the base version, for users on Version A, on Version B, and so on.
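In its simplest form, this is just counting visits and conversions per variant. A minimal sketch, assuming events arrive as (variant, converted) pairs; the event data below is made up:

```python
from collections import defaultdict

# Hypothetical event stream: which variant the user saw and whether they converted.
events = [
    ("base", True), ("base", False), ("A", True), ("A", True), ("B", False),
]

visits = defaultdict(int)
conversions = defaultdict(int)

for variant, converted in events:
    visits[variant] += 1
    if converted:
        conversions[variant] += 1

for variant in visits:
    rate = conversions[variant] / visits[variant]
    print(f"{variant}: {visits[variant]} visits, conversion rate {rate:.2%}")
```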

By comparing the metrics, you see which version best fits your business goals, and you make that version the new base (default) for your website.
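Before declaring a winner, it is worth checking that the observed difference is statistically significant and not just noise. One standard tool for conversion rates is a two-proportion z-test; here is a sketch with made-up counts:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two observed conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: base converts 200/10,000 visits, Version A converts 250/10,000.
z, p = two_proportion_z_test(200, 10_000, 250, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")   # a small p-value suggests a real difference
```

If the p-value is small (commonly below 0.05), the difference between the versions is unlikely to be due to chance alone, and you can act on it with more confidence.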

Experiment results also give you information that drives the generation of new hypotheses. It is an ongoing process.

Simple, but powerful!