Here's how you can improve your startup's conversion rate through A/B testing.
We've consolidated learnings from running thousands of A/B tests for companies.
A/B testing is the practice of comparing changes against your current version to see whether they improve conversion.
This post will cover:
- Deciding what to A/B test: Are you testing new header copy or a new CTA?
- Prioritizing tests: Which tests should you run first?
- Tracking results: Metrics to prioritize.
Testing is key: We've worked with numerous companies with low conversion rates and found it took at least three months of A/B testing before they got traction.
A/B testing isn't about striving for perfection with each variant. It's about iteration.
Constant improvement: Every day of the year, a test should be running—or you're letting traffic go to waste.
A/B testing means showing two variants to two separate groups of visitors at the same time and seeing which performs better. An overview of the A/B testing process:
- Decide on and prioritize high-leverage changes 💡
- Show some percentage of your visitors the change 🧪
- Run it until you reach a statistically significant sample size 🔎
- Implement changes that improve conversion 📈
- Log your test design and results to inform future tests 🖋
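To make the "show some percentage of your visitors the change" step concrete, here's a minimal sketch of deterministic traffic splitting. The visitor ID, experiment name, and 50/50 split are illustrative assumptions, not any specific tool's API:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, traffic_share: float = 0.5) -> str:
    """Deterministically bucket a visitor into 'control' or 'variant'.

    Hashing a stable visitor ID together with the experiment name means
    the same visitor keeps seeing the same version across sessions.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to a number between 0 and 1
    return "variant" if bucket < traffic_share else "control"

# Example: a 50/50 split for a hypothetical homepage headline test
print(assign_variant("visitor_123", "homepage-headline"))
```

In practice, a dedicated testing tool handles this bucketing and reporting for you; the sketch just shows the mechanics of a consistent split.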
Deciding what to A/B test
Sourcing ideas can be difficult, so here are a few ways to find test ideas:
- Survey users. Ask what they love about your product
- Use screen recording tools like Hotjar or FullStory to find engagement patterns: What are they clicking vs ignoring?
- Your best ads have value props, text, and imagery that can be repurposed for A/B tests.
Additional sourcing ideas:
- Mine competitors' sites for inspiration. Do they structure their content differently? Do they talk to visitors differently?
- If you're a solo founder, you likely interact with customers and know best what appeals to them.
- Revisit past A/B tests for new ideas.
Prioritizing tests
There are two types of A/B test variants to understand when prioritizing:
- Micro variants are small, quick changes:
  - Changing a CTA button color
  - Changing one line of copy
  - Testing a new image on a product page
- Macro variants are significant changes:
  - Completely rewriting your landing page
  - Changing the value props that you lead with
  - Overhauling the theme of your website
Prioritize macro changes because they often result in large conversion swings.
Additionally, prioritize A/B tests on earlier parts of your funnel, for two reasons:
- They have larger sample sizes — and you need a sufficient sample size to finish a test. E.g. you'll get thousands of impressions on an Instagram ad, but way fewer people see your checkout page.
- It's easier to change ads, pages, and emails than it is to change down-funnel assets like the in-product experience.
Other prioritization questions you can ask yourself:
- How confident are you the test will succeed?
- If a test succeeds, will it significantly increase conversion?
- How easy is it to implement?
- Is your test similar to an old test that failed?
Start with low-effort, high-leverage changes.
Tracking results
Here are two keys to setting up tests correctly:
- Run one A/B test at a time. Otherwise, visitors can criss-cross through multiple tests when changing devices (e.g. mobile to desktop) across sessions.
- Run A/B variants in parallel, not one after the other. Otherwise, differences in traffic between the test periods will invalidate your results.
You can use Google Optimize to run tests. It's a free A/B testing tool that integrates with Google Analytics and Ads.
To statistically validate tests, you need:
- 1,000+ visits to validate a 6.3%+ conversion increase
- 10,000+ visits to validate a 2%+ increase
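If you want to sanity-check those thresholds against your own numbers, a standard power calculation gives the required sample size per variant. The 5% baseline rate, 20% relative lift, and 80% power below are illustrative assumptions; the sketch uses the statsmodels library:

```python
# Rough sample-size check: visitors needed per variant to detect a given lift.
# Baseline rate, lift, and power below are assumptions, not recommendations.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05   # assumed 5% conversion rate on the control
lift = 0.20       # assumed 20% relative improvement (5% -> 6%)

effect = proportion_effectsize(baseline * (1 + lift), baseline)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_per_variant:,.0f} visitors per variant")
```

With those assumptions the answer comes out to a few thousand visitors per variant, which is why small micro-variant wins are so hard to validate on low-traffic pages.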
If you don't have a lot of traffic, focus on macro variants over micro ones: macros can produce 10-20%+ improvements, versus 1-5% for micros.
One note on sample sizes and revenue:
The closer an experiment's conversion objective is to revenue, the more worthwhile it may be to confirm small conversion boosts.
For example, a 2% improvement in purchase conversion is more impactful than a 2% improvement in "learn more" CTA clicks.
To track tests, mark the following in a task management tool (like ClickUp):
- Conversion target you're optimizing for: Clicks, views, etc.
- Before & after: Screenshots and descriptions of what's being tested.
- Reasoning: Why is this test worth running? Use your prioritization framework here.
When each test is finished, make note of:
- Duration: Start and end dates of the campaign
- Sample size: How many people the test reached
- Results: The change in conversion, and whether the result was a success, a failure, or neutral (the significance check sketched below can help)
If it was a success, note whether the variant was implemented.
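To decide whether a logged result is a real win rather than noise, a quick two-proportion significance test on the recorded counts is enough. The visitor and conversion numbers below are made up for illustration; the sketch uses statsmodels again:

```python
# Classify a finished test from its logged counts (numbers below are made up).
from statsmodels.stats.proportion import proportions_ztest

conversions = [120, 150]    # control, variant
visitors = [4000, 4000]     # sample size per variant

_, p_value = proportions_ztest(conversions, visitors)
lift = conversions[1] / visitors[1] - conversions[0] / visitors[0]

print(f"absolute lift: {lift:.2%}, p-value: {p_value:.3f}")
if p_value < 0.05:
    print("Statistically significant: log it as a success or a failure.")
else:
    print("Not significant: log it as neutral and keep iterating.")
```

Whatever threshold you pick, record the p-value alongside the result so future you knows how strong the evidence was.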
Then ask: What can we learn from the test?
- Use heatmaps to figure out why your variant won, e.g. maybe users were distracted by a misleading CTA in one variant.
- Survey customers who were impacted by the test, e.g. why did they prefer one line of copy over another?
Figuring out why your audience reacts to each variant the way it does will inform future tests.
Takeaways:
- A/B testing is higher-ROI and cheaper than most other marketing initiatives.
- Focus on macro variants that cause significant changes until you run out of bold ideas.
- Diligently track A/B test results and reference them when ideating future tests. Learn from your past mistakes.
If you like our posts, know that we save our best insights for our email newsletter.
Get bi-weekly growth tactics here: https://www.demandcurve.com/newsletter