A/B testing is one of the highest-leverage activities an e-commerce marketing team can engage in to drive incremental revenue without requiring a bigger marketing budget.
Whether you are a seasoned tester or new to the world of A/B testing, this guide will help demystify the process of implementing a testing strategy for your brand.
In this guide, you will learn how to generate hypotheses, discover practical testing ideas, and explore inspiring case studies.
Dive in and begin exploring the world of A/B testing.
You may think that A/B testing is a fairly new method that was born with the rise of the internet, but it actually dates back almost 100 years. In its most basic form, A/B testing simply compares two versions of the same thing and gathers information on what performs better.
Here’s a one-minute history lesson that won’t bore you. Statistician Ronald Fisher was conducting such tests back in the 1920s. In his agricultural experiments, he’d sprinkle a bit more fertilizer on some plots of soil just to see how things would grow. That’s when he first laid down the foundations of A/B testing, and you know what? Those principles still hold strong today.
Later, in the 1960s and 1970s, marketers applied the same concept to direct-mail campaigns, testing different variants of postcards and letters to see which resulted in more sales.
Since the 1990s, the fundamental concepts have remained unchanged, but now we conduct these tests in a real-time digital environment and with a significantly larger number of participants.
A/B testing, also known as split testing, replaces guesswork with data-driven results to optimize a webpage. It's a cornerstone of conversion rate optimization (CRO), drawing on both qualitative and quantitative insights.
E-commerce businesses encounter numerous obstacles that directly affect their profitability. These hurdles include high shopping cart abandonment rates, unclear messaging, unfriendly user interfaces, hidden or hard-to-find call-to-action buttons, and other pain points.
This is where A/B testing can be a game-changer.
One of the most significant challenges e-commerce owners wrestle with is shopping cart abandonment. If you're struggling with it, you're not alone.
In fact, recent data from Statista and the Baymard Institute puts the average global shopping cart abandonment rate at 70.19% in 2023 - the highest it has been since 2013.
More specifically, online furniture stores have the highest shopping cart abandonment rate, at a whopping 84% for desktop, 89% for mobile and 90% for tablet users.
While some customers simply change their minds and opt not to check out, a 70% abandonment rate clearly can't be explained by second thoughts alone.
Proper A/B testing can help seal these leaks and recover lost sales. Test free shipping offers, different payment options, clearer communication of product value, shorter forms, and user experience improvements.
The cost of acquiring a customer has never been higher - and in 2024, it's set to be e-commerce's toughest challenge. A recent study by SimplicityDX found that the average customer acquisition cost (CAC) was $29 per customer. For comparison, the average CAC in 2013 was $9.
CAC is one of the most important e-commerce metrics to follow. Monitoring the balance between your customer acquisition cost (CAC) and customer lifetime value (LTV) offers invaluable insight into the overall sustainability of your marketing efforts. Luckily, A/B testing can reduce your customer acquisition costs.
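As a quick illustration, here's a minimal sketch of that balance in Python. The figures are hypothetical, and the 3:1 benchmark is a commonly cited rule of thumb rather than a universal target:

```python
# Hypothetical monthly figures
marketing_spend = 29_000.00
new_customers = 1_000
avg_order_value = 58.00
orders_per_customer_lifetime = 2.5

cac = marketing_spend / new_customers                 # cost to acquire one customer
ltv = avg_order_value * orders_per_customer_lifetime  # revenue per customer lifetime
ratio = ltv / cac

print(f"CAC: ${cac:.2f}, LTV: ${ltv:.2f}, LTV:CAC = {ratio:.1f}:1")
# A ratio around 3:1 is a commonly cited health benchmark;
# below 1:1 you lose money on every customer you acquire
```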
A/B testing serves as a reliable compass, guiding your decisions with confidence.
Whether you’re refining a new homepage layout, targeting specific products to specific audiences, or optimizing your checkout page, A/B testing frees you from gut feelings and lets you make fast decisions based on what the data actually says - not on anyone’s opinion.
Your bounce rate is the percentage of visitors who view a single page of your e-commerce website and leave without taking any action. A high bounce rate means visitors are quickly leaving your site without engaging further - a sign that the page is underperforming.
With a good website analysis and strong hypotheses set in place, Uselutions actually succeeded in reducing RapidUserTest’s bounce rate by 31%, which ultimately led to an increase of 47% in conversions.
Getting a better grasp on what customers actually like will help you structure higher converting product pages, blog posts, videos and other e-commerce assets. The more A/B tests you run, the more data you have on the most engaging page elements and structure.
There are many examples (we’ll get to them shortly) where even the smallest tweaks between variants - like a sticky vs. non-sticky navigation bar, or a solid green background vs. a green gradient that fades into blue - increased conversions by 2% or more!
When it comes to methodologies and frameworks in A/B testing, the industry uses two main approaches to interpret data statistically: Frequentist and Bayesian. Each approach has its own value and underpins how A/B testing tools calculate results from the collected data.
Let’s imagine you own an online fashion boutique and you want to test between two different variations of your homepage, normally to measure which variant converts more. If you were to take the Frequentist approach, you would take into account the data only from the current experiment.
In this approach to A/B testing, you rely on p-values and confidence intervals to determine the likelihood of your results occurring by chance. It is similar to flipping a coin to see if you get heads significantly more often than tails. If the difference is significant, you come to the conclusion that one version of your webpage is better than the other.
In Frequentist A/B testing, probability is interpreted as long-run frequency. The parameters being tested are not viewed as random variables; they are considered fixed but unknown. For example, if we flip a coin ten times and get heads seven of those times, the frequentist estimate of the probability of heads is 7/10, or 70%.
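To make the Frequentist approach concrete, here's a minimal sketch of a two-proportion z-test using only Python's standard library. The visitor and conversion counts are hypothetical:

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                         # two-sided p-value
    return z, p_value

# Hypothetical data: variant A converts 120/4000, variant B converts 150/4000
z, p = two_proportion_z_test(120, 4000, 150, 4000)
print(f"z = {z:.2f}, p-value = {p:.3f}")  # significant at the 5% level if p < 0.05
```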
Assume you're running an electronics e-commerce business and want to experiment with alternative product page layouts to boost add-to-cart rates. Bayesian A/B testing provides a more refined approach. Instead of relying just on statistical significance, Bayesian analysis allows you to include prior probability and continually update your beliefs as new evidence becomes available.
Chris Stucchio provides a great example of applying a Bayesian approach to conversion rates for a hypothetical startup, BeerBnB. He marketed the brand in bars (by placing ads in bathrooms), which brought in 794 unique visitors. Of those, 12 created an account - a 1.5% conversion rate.
A Bayesian approach treats that evidence as prior knowledge: based on the 1.5% conversion rate observed so far, you'd expect roughly 150 sign-ups if BeerBnB reaches 10,000 visitors, and you'd keep refining that estimate as new data arrives. This approach is more widely used in marketing, as its results are easier to interpret than Frequentist ones - which, according to many studies, even statisticians misinterpret.
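Here's a minimal sketch of the Bayesian idea using a Beta-Binomial model and Monte Carlo simulation, built with Python's standard library. The prior and the per-variant counts are hypothetical:

```python
import random

random.seed(42)

# Prior belief from earlier evidence (e.g., 12 conversions out of 794 visitors)
prior_alpha, prior_beta = 12, 794 - 12

# New experiment data (hypothetical): conversions / visitors per variant
a_conv, a_n = 40, 2000
b_conv, b_n = 55, 2000

def posterior_sample(conv, n):
    # Beta posterior: prior updated with observed successes and failures
    return random.betavariate(prior_alpha + conv, prior_beta + (n - conv))

# Estimate P(variant B's true rate > variant A's) by simulation
draws = 100_000
b_wins = sum(posterior_sample(b_conv, b_n) > posterior_sample(a_conv, a_n)
             for _ in range(draws))
print(f"P(B beats A) ~= {b_wins / draws:.1%}")
```

Instead of a single yes/no significance verdict, this yields a direct probability that one variant beats the other, which is why marketers often find Bayesian results easier to act on.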
Like any scientific experiment, it all starts with a hypothesis. In A/B testing, a hypothesis isn’t a question but a clear, measurable statement that answers one. Ask yourself: “What change can I make that will positively influence my customers’ behavior, and which KPI will it impact?”
While there’s no truly bad hypothesis - every test teaches you something - a well-thought-out one can definitely open the door to more sales. So how do you formulate a great evidence-based hypothesis?
These components make up the process of successful A/B testing:
Finding something to test isn’t complicated. The challenge is to decide which idea (among the many) is worth testing first. In order to identify a good hypothesis, you’ll need to observe your entire customer journey.
Find an existing webpage or element that you want to test. The original (variant A) is also known as "control,” and the alternate variation (variant B) is “treatment.”
Figure out points where there are sudden drops in traffic; these most likely indicate a problem that needs to be addressed. Collect past data from analytics, conduct customer surveys, and use tools like heatmaps to observe visitor behavior and support your hypothesis.
Keep in mind that the hypothesis doesn’t necessarily need to quantify the extent of improvement; it should instead focus on determining the direction of improvement.
Here’s how a hypothesis should be formulated:
“If we make change X, it will improve the member experience in a way that makes metric Y improve.”
The optimizing masters over at Netflix were planning to test their top “10 Lists” and their hypothesis read:
“Showing members the Top 10 experience will help them find something to watch, increasing member joy and satisfaction.”
In order to initiate an A/B test, you’ll need to create two distinct variants. It’s advised that you label your variants more descriptively so they can be easier to review and analyze later. For instance, instead of labeling “Variant A,” you can write something more specific like “Variant A - Promotional CTA.”
A/B tests can be deployed to either an entire audience or a specific subset. For instance, if applied to a website, serving the entire audience entails having half of the visitors interact with the original site and the other half experience the new version under test. Alternatively, you could opt to expose only 25% of your audience to the new variant or selectively target visitors from California, where half would view the original and the rest the new version.
Choosing the appropriate audience hinges on the demographic or segment you believe your hypothesis applies to, as well as how urgently you need to collect enough data to draw conclusions.
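One common way to split an audience consistently is deterministic bucketing: hash a stable visitor ID so each visitor always sees the same variant. Here's a minimal sketch; the ID format and the 50/50 split are assumptions for illustration:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str,
                   treatment_share: float = 0.5) -> str:
    """Deterministically assign a visitor to 'control' or 'treatment'."""
    # Hash visitor + experiment name so the same visitor can land in
    # different buckets across different experiments
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

print(assign_variant("visitor-12345", "homepage-hero-test"))  # stable per visitor
```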
In order for your A/B test to collect enough data, it’s advised that it run for at least two weeks during the experimenting phase. That should give you plenty of time to ride out any fluctuations, especially since customers may behave differently on weekends.
Running the A/B test for at least two weeks gives you ample time to consider various factors: "I need to think about it" buyers, diverse traffic sources like Facebook, Google, organic searches, newsletters, and unexpected traffic surges, such as those from a Friday newsletter.
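Two weeks is a rule of thumb; the real constraint is sample size. Here's a minimal sketch of the standard two-proportion sample-size formula, using fixed z-values for 95% confidence and 80% power (the baseline rate and target uplift are hypothetical):

```python
from math import ceil

def sample_size_per_variant(baseline, uplift, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variant to detect a relative uplift.

    z_alpha = 1.96 for two-sided 95% confidence; z_beta = 0.84 for 80% power.
    """
    p1 = baseline
    p2 = baseline * (1 + uplift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (p2 - p1) ** 2
    return ceil(n)

# e.g., 3% baseline conversion rate, hoping to detect a 10% relative lift
n = sample_size_per_variant(0.03, 0.10)
print(f"~{n:,} visitors per variant")
# Divide by your daily traffic per variant to estimate test duration
```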
It’s important to know that you shouldn’t run multiple A/B tests on the same webpage or the same audience at the same time, because overlapping tests will contaminate each other’s results.
Looking for a statistically significant result is key when we’re analyzing A/B test results.
Think of it as a check that tells you whether a result is trustworthy, rather than a lucky coincidence or an artifact of a small sample size.
If you have a small audience, you’ll need a large difference in performance to indicate statistical significance and draw a conclusion. On the other hand, if you have a large sample audience, even the slightest change in performance can have enough statistical significance.
Tools like our A/B testing tool can calculate statistical significance for you (Google Optimize, once a popular free option, was sunset in 2023), so you can focus on sharing the results with your team and implementing the appropriate changes.
In e-commerce, A/B testing focuses on spotting pain points that create a less-than-ideal user experience, which can ultimately lead to lower sales. These friction points can stem from various elements across your website.
Make sure to take the following elements into consideration when planning to run a test for your e-commerce store:
The cool thing about A/B testing is that it lets you tackle each of these issues step by step. It’s recommended to test one element at a time. If you make too many changes to your variation, you won’t be able to pinpoint which change makes the greatest impact.
And when you listen to what your customers have to say, you can make changes that not only fix problems but also make shopping better for everyone - keeping them coming back for more.
Here are 6 e-commerce testing ideas to help inspire you on what you can test on your e-commerce store. Remember, testing variations of your website elements like images, call-to-action buttons, and navigation can uncover valuable insights into what resonates best with your audience. So don't hesitate to explore different avenues and measure the impact of each change on your conversion rates.
A recent study by Baymard found that the number one reason for shopping cart abandonment is high extra costs, like shipping and taxes. Which raises the question: could free (or cheaper) shipping be your number one driver of sales?
Sometimes a free shipping option isn’t profitable for e-commerce owners. Here are some ways you can still incorporate free shipping:
Online furniture store All Modern tells customers right on the homepage - at the very start of the buying journey - that it offers free shipping, helping to reduce shopping cart abandonment.
The first thing that people see when they land on your website is your homepage banner. And according to The Good, carousels aren’t as interesting to users as you may think. In fact, they found that only 1% of users actually click on carousel images.
Allbirds, an online shoe and clothing retailer, nails it with their hero image. They keep it clean and straightforward, with minimal text and a simple image featuring their shoes in a natural setting. It's a prime example of how simplicity can catch the eye effortlessly.
When it comes to A/B testing CTA buttons, you've got a bunch of options to play around with, like:
Your CTA button needs to be super easy to spot. Place it where customers can immediately figure out what to do next. The button should really pop and catch the eye - fitting the overall website design while still standing out.
Positioning a "Buy Now" or "Add to Cart" button on the right side of your product page, above the fold where it's immediately visible, can greatly increase the number of users moving down your sales pipeline.
Well-written copy helps communicate value and persuades your online shoppers to take the next step. Your headline is the biggest text on your homepage and landing pages, so you can be sure that most, if not all, of your visitors will read it.
In the following A/B test, you can see how different angles in the headline and tagline copy can result in a massive difference. In the control (variant A), the copy emphasized ‘no hidden fees’ while the treatment (variant B) emphasized a free trial and the amount of time it’ll take them to sign up.
Variant B outperformed variant A by 30%. The winning copy reduced uncertainty and informed the visitors enough to take action.
The Control - Variant A:
The Treatment - Variant B (winner):
Make sure your checkout page is a breeze for your customers, so they can complete their purchase quickly and easily. A recent Shopify study found that customers converted 7.5% more with single-page checkout.
So, why not give it a try? Test out single-page versus multi-page checkout screens and see which one keeps more customers going through to the end. And don't forget, whether it's one page or multiple, to keep those form labels clear and steer clear of any distracting links that might throw your customers off course.
When it comes to optimizing your website, don't overlook navigation—it's key to a great user experience. Make sure your site's structure is clear, with pages linking logically to one another.
Start with the homepage—it's where all roads lead. Keep your navigation simple so visitors can easily find what they need without getting lost. Each click should take them where they want to go.
Here are a few tips to enhance your navigation:
By fine-tuning your website's navigation, you'll not only boost conversions but also leave visitors wanting to return for more.
A/B testing is a powerful tool, but it does require time, resources, and enough traffic to bring in meaningful results. That means you'll want to focus on tests with the highest potential and biggest impact.
But with so many ideas, how do you decide which ones to tackle first—or if they're worth pursuing at all? There are several prioritization frameworks available, each with its own advantages and disadvantages:
PIE, pioneered by Chris Goward, stands as one of the most widely used frameworks in A/B testing.
The acronym represents three areas in which to rate your ideas:
Another well-known framework is the ICE Score, pioneered by Sean Ellis. This approach breaks down into three key factors for evaluating an idea:
Chris Goward unveiled the LIFT (Landing Page Influence Function for Tests) Model in 2009, offering a structured approach to analyzing web and mobile experiences and formulating A/B test hypotheses.
LIFT focuses on six important conversion factors:
Each of these elements has the potential to either boost your conversion rate or, if not optimized correctly, decrease it.
The PXL framework, built by CXL, is based on a binary scale: you must select one option or the other. For most variables (unless otherwise specified), you score either a 0 or a 1.
It also weights certain factors by importance: how noticeable the change is, whether an element is added or removed, and the ease of implementation. For these variables, the framework specifies how the scoring changes; for example, the noticeability-of-change variable is scored as a 2 or a 0.
In the PIE framework, by contrast, each aspect - potential, importance, and ease - is assigned a number from 1 to 10, and the three scores are averaged to compute the total PIE score.
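As a quick illustration, here's a minimal sketch of scoring a backlog of test ideas with PIE. The ideas and scores are hypothetical:

```python
# Each idea gets 1-10 scores for Potential, Importance, and Ease;
# the PIE score is their average, and the backlog is sorted by it.
ideas = [
    ("Free-shipping banner on homepage", 8, 9, 7),
    ("Single-page checkout",             9, 8, 3),
    ("New CTA button color",             4, 6, 9),
]

scored = sorted(
    ((name, (p + i + e) / 3) for name, p, i, e in ideas),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in scored:
    print(f"{score:.1f}  {name}")
```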
There are four common methods of testing, which we’ll describe below:
Genuine A/B testing assesses the effectiveness of individual digital components, like a CTA button or a color scheme. The A group represents the current presentation of the element on your website or app, while the B group introduces a change that you're testing.
A common misconception is that split URL testing is just another name for traditional A/B testing. In fact, split URL testing serves a new version of an existing web page at a different URL to determine its performance compared to the original.
While A/B testing is ideal for testing minor front-end tweaks, split URL testing is reserved for larger-scale changes - especially redesigns - to existing websites.
Multivariate Testing (MVT) is an experimental method where various combinations of page variables are tested simultaneously to identify the most effective combination. It's a more advanced technique than standard A/B testing and is typically handled by experienced marketing, product, and development professionals.
When executed properly, Multivariate Testing can eliminate the necessity for multiple sequential A/B tests with similar objectives on a webpage. Testing multiple variants concurrently enables you to save time, money, and effort while achieving results more efficiently.
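To see why MVT needs far more traffic than a simple A/B test, consider how quickly combinations multiply. Here's a minimal sketch with hypothetical page variables:

```python
from itertools import product

# Hypothetical page variables for a multivariate test
headlines = ["Free shipping on all orders", "30-day money-back guarantee"]
hero_images = ["lifestyle-shot", "product-on-white", "video-loop"]
cta_labels = ["Add to Cart", "Buy Now"]

combinations = list(product(headlines, hero_images, cta_labels))
print(f"{len(combinations)} combinations to test")  # 2 x 3 x 2 = 12

# Each combination needs enough traffic on its own to reach significance,
# which is why MVT is reserved for high-traffic pages.
```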
Multipage Testing involves experimenting with modifications across multiple pages.
There are two approaches to Multipage Testing. First, you can replicate all your sales funnel pages to create an alternate version and then compare its performance with the original, known as Funnel Multipage Testing.
Alternatively, you can examine how the addition or removal of recurring elements, such as security badges or testimonials, impacts conversions across the entire funnel. This is known as Classical or Conventional Multipage Testing.
You need to track and measure the metrics that are best aligned with your e-commerce business goals. For instance, an e-commerce platform might conduct an A/B test to reduce cart abandonment, while a software company could experiment with different call-to-action (CTA) button designs on a landing page to increase free sign-ups. The key performance indicators (KPIs) for monitoring A/B testing outcomes in each case would vary.
Here are the most important metrics to analyze when A/B testing:
Click-through rate (CTR) is the ratio of clicks on a particular link to the total number of times the link is displayed (also known as impressions). This metric helps in assessing the effectiveness of clickable website elements such as CTA buttons and navigation links in engaging your target audience.
You can calculate CTR with the following formula:
CTR = (Clicks / Impressions) x 100
Bounce rate indicates the percentage of visitors who enter your website but leave without taking further action, such as clicking a link. These occurrences are referred to as single-page sessions.
A high bounce rate can suggest low visitor engagement and may highlight issues with website design or content. This insight helps you better understand the effectiveness of both your experiment control and variant. You can see this metric in Google Analytics.
You can calculate your bounce rate with this formula:
Website bounce rate = Single-page sessions / Total sessions
The conversion rate is the proportion of users who complete a desired action, or "convert," on your website. This action could include clicking on a specific link, signing up for a newsletter, or making a purchase. It's a fundamental metric for assessing the success of A/B tests, as it indicates how effectively your variations drive user engagement and fulfill your objectives.
Here’s how you can calculate it:
Conversion rate = (Number of conversions / Total number of visitors) x 100
Scroll depth is a metric that tracks how far down a web page a user scrolls, uncovering the most engaging sections and where users tend to drop off. By analyzing scroll depth data, you can make informed decisions to enhance user engagement and conversions.
This may involve optimizing content and design elements, which can be tested through an A/B test to gauge their effectiveness.
The abandonment rate signifies the percentage of tasks initiated by users but left incomplete, like exiting a survey halfway through or adding items to an online shopping cart without making a purchase. This metric is particularly significant in the e-commerce sector, often used to calculate cart abandonment rates.
How to calculate cart abandonment rate for ecommerce stores:
Cart abandonment rate = (Number of carts abandoned / Number of orders initiated) x 100
Retention rate refers to the percentage of users who revisit a website or specific page after a specified period.
By comparing retention rates across various A/B test variations, you can determine which version prompts users to return and engage further. This data is valuable for optimizing your website to foster greater customer loyalty and ensure long-term success.
How to calculate retention rate:
Retention rate = (Number of users who return to the page or website within a time period / Total number of users who visited the page or website during the same time period) x 100
Session duration refers to the length of time a user spends on a website during a single visit. It encompasses each user's session from the moment they enter the site until they exit or become inactive.
A longer session duration often suggests that users find the website informative or enjoyable, which can have a positive effect on conversions and overall user satisfaction.
Here’s how to calculate it:
Average session duration = Total session duration / Total sessions
Average order value (AOV) represents the average amount a customer spends on a single purchase on a website. This metric is crucial for assessing the impact of an A/B test variant, particularly for e-commerce brands. It indicates whether website changes have resulted in a positive or negative impact on the amount customers spend.
Calculate it using this formula:
AOV = Total revenue / Total number of orders
Revenue is the primary metric for most A/B tests - the ultimate measure of your hypothesis's impact on your bottom line. It complements other metrics like conversion rate, average order value (AOV), and abandonment rate.
Prioritizing revenue allows you to gauge whether your changes genuinely benefit your business, guiding your decisions on whether to continue with the current approach or explore new A/B testing strategies.
During an A/B test, track revenue by focusing on sub-metrics such as revenue per visitor, revenue per customer, lifetime value (LTV), and conversion value.
You can calculate revenue by multiplying the number of orders by the average order value.
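To tie these formulas together, here's a minimal sketch that computes the core KPIs from raw counts. All the numbers are hypothetical:

```python
# Hypothetical raw counts from one A/B test variant
sessions = 10_000
single_page_sessions = 4_200
cta_impressions = 8_000
cta_clicks = 640
conversions = 310
total_revenue = 24_180.00

ctr = cta_clicks / cta_impressions * 100
bounce_rate = single_page_sessions / sessions * 100
conversion_rate = conversions / sessions * 100
aov = total_revenue / conversions  # total revenue / number of orders

print(f"CTR:             {ctr:.1f}%")
print(f"Bounce rate:     {bounce_rate:.1f}%")
print(f"Conversion rate: {conversion_rate:.1f}%")
print(f"AOV:             ${aov:.2f}")
print(f"Revenue check:   ${conversions * aov:,.2f}")  # orders x AOV
```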
The statistical significance level - also known as the confidence level - reflects how unlikely it is that your results occurred by chance. For digital marketers, being confident in the results is crucial.
This metric indicates that the observed differences between a variation and a control are not merely due to chance. It provides assurance that the outcomes of your A/B test are reliable and meaningful.
Even if a test reaches 95% confidence, it's wise to continue running it to avoid settling for a potentially false improvement.
Before concluding an A/B test, ensure the following four conditions are met:
Shogun calculates statistical significance for you using the Chi-Squared (χ²) method. A Chi-Squared test is particularly well suited to experiments whose outcomes are categorical, such as clicks or conversions. It's also robust: it doesn't require assumptions about the underlying distribution of the data.
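For the curious, here's a minimal sketch of the same kind of Chi-Squared test on a 2x2 table of conversions, using SciPy. The counts are hypothetical, and this is an illustration of the technique rather than Shogun's internal implementation:

```python
from scipy.stats import chi2_contingency

# Rows: variants; columns: [converted, did not convert] (hypothetical counts)
table = [
    [120, 3880],  # variant A: 120 conversions out of 4000 visitors
    [150, 3850],  # variant B: 150 conversions out of 4000 visitors
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p-value = {p_value:.3f}")
# p < 0.05 would suggest the difference is unlikely to be chance alone
```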
As with all experiments, there are a lot of things that can go wrong. Here are 7 mistakes to avoid when A/B testing.
Before diving into an A/B test, it's crucial to establish a well-defined hypothesis and anticipate the expected outcome. This ensures you can accurately interpret the results and make informed decisions.
Take note of seasonal variations, such as holiday seasons or economic situations, that may influence metrics like conversion rates and average order value.
While quantitative data is crucial, it's not the only factor to consider. Integrating quantitative analysis with qualitative feedback provides a more comprehensive understanding of your website's performance.
Make sure you've got enough data and give the test plenty of time to do its thing before jumping to conclusions. Cutting it short might give you the wrong idea.
Pay attention to primary metrics like conversion rates, but don't forget about secondary metrics that reveal important insights into user behavior. Taking a closer look at these metrics is key to optimizing your website effectively.
Segment A/B test results based on user behavior or demographic details to ensure accurate conclusions. Segmentation unveils distinct patterns within specific groups, facilitating targeted optimization strategies.
There are many tools for A/B testing, and choosing the right one can be confusing. Our A/B testing tool takes the guesswork and complex calculations off your hands, serving up everything you need on a silver platter, ready for seamless implementation.
This tool is perfect for:
It’s no secret that Google’s ranking algorithm evaluates websites based on user experience. By continuously improving your user experience with A/B testing, you’ll strengthen your SEO strategy and can start ranking higher on Google.
While the long-term benefits are real, incorrect implementation of A/B testing can lead to negative outcomes that drastically damage your website’s SEO.
Here are some risks that can damage your SEO to take into consideration:
Google’s algorithm heavily penalizes websites with duplicate content. For A/B tests, publishing variations of similar page content can trigger duplicate content warnings from search engines. If the content doesn't get much search engine traffic, duplicate content may not affect SEO. But for important pages, use a canonical URL to specify the primary version for search results.
Google penalizes websites for "cloaking," lowering their rankings if detected. Cloaking involves showing different content to users and search engines to manipulate rankings and mislead users. Therefore, hiding A/B testing pages from search engines is not recommended to maintain consistency and avoid cloaking.
Slow-loading pages harm the user experience and are ranked lower by Google. A/B testing tools may further slow down loading times. To mitigate this, optimize page load speed, such as by deploying A/B testing on the server side.
Assigning the correct redirect code is crucial in A/B testing. Use a temporary 302 redirect for A/B variations to inform Google which page to index and which not to. Avoid a 301 redirect, as it indicates a permanent redirect, transferring all ranking power to a new URL, which is undesirable.
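As an illustration, here's a minimal server-side sketch in Flask showing a temporary 302 redirect to a variant page, plus a rel=canonical tag pointing search engines back to the original. The routes, the 50/50 split, and the domain are all hypothetical:

```python
import random

from flask import Flask, redirect

app = Flask(__name__)

@app.route("/product")
def product():
    # Hypothetical 50/50 assignment; in practice, use a stable visitor ID
    # (see the hash-bucketing sketch earlier) so assignment is consistent.
    if random.random() < 0.5:
        # 302 = temporary redirect, so Google keeps indexing the original
        # URL instead of transferring ranking signals to the variant
        return redirect("/product-b", code=302)
    return "<h1>Original product page</h1>"

@app.route("/product-b")
def product_variant():
    # rel=canonical points search engines back to the original page,
    # avoiding duplicate-content issues for the variant
    return ('<link rel="canonical" href="https://example.com/product">'
            "<h1>Variant product page</h1>")
```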
Before diving into your own A/B test, let's explore some real-life case examples. Here are a few of our favorite A/B test examples:
Online sunglasses retailer Christopher Cloos saw an increase of 15.38% in conversions after switching up the popup experience for its users.
Online supplement brand Obvi struggled with high shopping cart abandonment. To tackle it and provide a better experience, they ran an A/B test in which variant B showed a popup during checkout with an additional 10% discount and a countdown timer to add FOMO.
The result? A whopping 25.17% increase in conversions.
Beckett Simonon is a leather shoe e-commerce brand that shows us the power of incorporating storytelling - where you least expect it!
Just by adding a “storytelling” panel, they highlighted their craftsmanship, and increased their conversions by 5%.
L’Axelle, a sweat reduction product manufacturer, achieved a staggering 93% increase in conversions by implementing compelling action-oriented copy on their landing page!
Food delivery brand Gousto increased its post-order sales with a simple tweak to its order confirmation page design.
They replaced the traditional "you're all done" confirmation with a step-by-step screen, so customers perceived the buying process as having additional steps still to complete.
This perception led to a remarkable 20% increase in product additions to their cart after completing their initial purchase.
Metals4U saw a remarkable 34% surge in conversion rates by strategically showcasing their delivery details on their ecommerce platform.
Their emphasis on delivery times and fast service significantly influenced customer buying decisions, resulting in a substantial increase in conversions.
A/B testing stands as one of the most potent and efficient methods to propel e-commerce growth. By moving beyond intuition and gaining real insights into customer preferences, A/B testing ensures you're never left guessing about what drives conversions.
After an A/B test, it’s advisable to focus on the following:
If one version significantly outperformed the other, woohoo! It's time to implement those changes and start reaping the rewards for your business.
But don't stop there; continuous testing and evolution are key. Keep refining your website to keep customers engaged and coming back for more. Implement your learnings across your storefront and use the advantages of Shogun’s storefront design tool.
The storefront design tools allow you to easily replicate and save snippets of winning designs and place them across more page types to create an even bigger impact.
The last thing you should do is stop testing. Regular testing is paramount: consumer preferences, market trends, and competitive dynamics are constantly evolving.
What worked well last month may not be as effective this month. Regular testing ensures you stay ahead of these changes and make necessary adjustments to your ecommerce strategy.
When a team sticks to the status quo without curiosity or room for improvement, their output is unlikely to be exceptional. This often reflects the broader organizational culture, where the way things are done matters more than you might realize.
Try to host a day to meet up with your team and discuss your findings and areas that can be improved. You can call it “Learning Fridays.”
With this in-depth guide and its examples, you’re fully equipped to start planning your next A/B tests. The good news is that simply by starting to A/B test, you’ll be ahead of most of your competition. Why? Because they think A/B testing is complicated.
While you could dive into Google Analytics, tinker with free online A/B testing tools, and brush up on statistics, why not make life easier? With Shogun's A/B testing tool, everything's done for you! It's like having your own personal testing wizard, saving you trial and error.
If you found this guide helpful, spread the word and assist fellow optimization enthusiasts in A/B testing without falling for the same mistakes.