Good website design and management are not based on feelings or personal preferences. Rather, they’re based on data and facts that move you closer to your goals. That’s why I love A/B split testing: it’s all about finding the right path for your design based on hard data.
If you’re serious about measuring the ROI of a website, A/B split testing will help you work toward the best design by letting you test different options against each other and discover which one performs better. You start by picking a key conversion metric (a buy button, a sign-up button, or some other call-to-action) and designing two (or more) versions of the page. Visitors are randomly served one version or another, and your tracking tools measure how often the desired conversion happens on each. Once you determine a winner, you can direct all traffic to the final, higher-performing page.
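Conceptually, the mechanics are simple. Here’s a minimal sketch of the idea in TypeScript; the URLs and counters are hypothetical, and in practice Google Analytics handles the random assignment and bookkeeping for you:

```typescript
// Conceptual sketch of a two-page split test: serve visitors a random
// variation, count visits and conversions, and compare conversion rates.
type Variation = { url: string; visitors: number; conversions: number };

const variations: Variation[] = [
  { url: '/signup',   visitors: 0, conversions: 0 }, // original ("A")
  { url: '/signup-b', visitors: 0, conversions: 0 }, // variation ("B")
];

// Each visitor is randomly assigned one version of the page.
function assignVariation(): Variation {
  const chosen = variations[Math.floor(Math.random() * variations.length)];
  chosen.visitors += 1;
  return chosen;
}

// Record each time the desired conversion (e.g., a completed sign-up) happens.
function recordConversion(v: Variation): void {
  v.conversions += 1;
}

// The conversion rate per variation is what decides the winner.
const conversionRate = (v: Variation) =>
  v.visitors === 0 ? 0 : v.conversions / v.visitors;
```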
You should also consider incremental design using A/B split testing. Once your initial experiment is complete, you can try again with the same page by changing another element, allowing you to continue improving the page’s performance over time.
There are many tools available to help you with A/B split testing. But everything you need to run a split test experiment is available for free in Google Analytics. Here’s what you need to do to set up your own experiments:
Step 1: Decide What You Want to Measure
The first thing you’ll need to do is to determine what you want to measure. Is it a site metric like pages per visit, or length of time on site? Or is it getting to a specific page like a sign-up form, or a purchase “thank you” page?
As you define your desired outcome, you’ll need to create multiple versions of the web page you plan to test, each built to achieve that outcome. Every one of these two (or more) pages will have something different in its design. While you can test two pages with completely different designs, it’s best to test smaller elements of similarly designed pages: the placement of the call-to-action, the colors of sign-up forms, the wording of the header text, or whatever other option you want to test. Whatever it is, create the pages with your desired outcome in mind and a hypothesis for how each variation might improve your conversion rate.
Once you have your split test pages created, you’ll be able to set up the goals you’ll need to measure their success.
Step 2: Create Goals in Google Analytics
Once you know what you want to measure, you’ll need to set up Goals so that Google Analytics can track the conversion rate for those events. Goals are worth tracking regardless, but you’ll need specific goals to use for your split test experiment. Here’s how to set them up in GA (with a sketch for event-type goals after the list):
- Go to the Admin tab in Google Analytics
- Select the profile you want to add your goal to
- Click on the ‘Goals’ tab
- Click the ‘+ New Goal’ button
- Select the option for either an existing template or a custom setup (most likely a template)
- Complete the Goal Description by giving it a name and selecting the type
- Complete the Goal Details with the desired outcome/values for your goal type
- Click ‘Save’
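Destination goals (like reaching a purchase “thank you” page) need no extra code, but if your goal type is an Event, your page has to send that event to Google Analytics itself. Here’s a minimal sketch using the Universal Analytics analytics.js `ga()` command queue; the button id and the category/action/label values are hypothetical and must match the Event conditions you enter in your goal details:

```typescript
// Assumes the standard analytics.js tracking snippet is already on the page,
// which defines the global ga() command queue.
declare function ga(...args: unknown[]): void;

// Hypothetical sign-up button; adjust the selector to match your own markup.
const signupButton = document.querySelector<HTMLButtonElement>('#signup-button');

signupButton?.addEventListener('click', () => {
  // Send an event hit; category, action, and label are example values.
  ga('send', 'event', 'signup', 'click', 'header-cta');
});
```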
Once your goals are set up, you’ll be able to create your split test experiment.
Step 3: Create Your Split Test Experiment in Google Analytics
At this point, you should have two (or more) versions of a web page you’ll be testing, and at least one goal you’ll be using to track and compare the pages. With that you’ll be able to set up your split test experiment in Google Analytics.
- Go to the Reporting tab in Google Analytics
- Select ‘Experiments’ in the ‘Behavior’ menu
- Click the ‘Create experiment’ button
- Set name and objective for the experiment
- Configure your experiment with the original page and variations
- Insert your experiment code immediately after the opening head tag of the original page in your test (or let the Google Content Experiments plugin place it for you)
- Review and start your experiment
Your experiment will run for a period of time (Google defaults to 30 days), tracking goal conversions as it randomly sends visitors to the original page and each variation. After your experiment has run for a sufficient amount of time, you’ll be able to determine a winner.
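“Sufficient” is the key phrase: small differences between pages need a lot of traffic before you can trust them. Google Analytics reports its own confidence figure for each variation, but the underlying idea is a standard two-proportion test. Here’s a rough sketch, with made-up visit and conversion counts for illustration:

```typescript
// A rough two-proportion z-test: is the difference between two conversion
// rates likely real, or just noise? |z| above ~1.96 suggests significance
// at roughly the 95% confidence level.
function zScore(convA: number, visitsA: number, convB: number, visitsB: number): number {
  const rateA = convA / visitsA;
  const rateB = convB / visitsB;
  const pooled = (convA + convB) / (visitsA + visitsB);
  const stdErr = Math.sqrt(pooled * (1 - pooled) * (1 / visitsA + 1 / visitsB));
  return (rateB - rateA) / stdErr;
}

// Hypothetical numbers: page A converted 120 of 2,400 visitors,
// page B converted 168 of 2,350. Prints roughly 3.1, well above 1.96,
// so B's lift is probably real.
console.log(zScore(120, 2400, 168, 2350).toFixed(2));
```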
Step 4: Determine the Winner and Repeat as Needed
Once you’ve determined a winner, direct all traffic to the winning page, confident that you’re getting the better conversion rate for your goals. At this point you can leave it alone or try another change on the page. The beauty of incremental design using A/B split testing is that you can constantly work toward better conversions, and the result never takes you backward. If you run another split test and your new “B” page does not perform better than your “A,” you keep the existing “A” page. And when a new “B” page out-performs your “A” page, it takes over as your new “A” for the next test.
I recently worked with a client on a split test for the highest traffic page on their website (it gets more traffic than the homepage). The problem with the page was that it also had a high bounce rate. So we knew it was effective in getting people TO the website, but not with KEEPING them there. We reviewed the page and rebuilt it with a cleaner design and a nice call-to-action at the top of the page to encourage click-through to another page for more information (lowering that bounce rate). With the newer, much fancier design, we were certain the new variation would be a big hit with visitors.
Much to our surprise, the split test experiment showed that the original not only out-performed our awesome new design, but it beat it pretty decisively. That was a great reminder for me that I should never base design on feelings or personal preferences. Data shows the real impacts.
Use the data available to you effectively, and you’ll reap the rewards of a high-performing website.
Note: This post was originally published on the MainWP Blog.