Is Organic Traffic Better for A/B Testing than Paid Traffic?
This topic came up in a conversation with a fellow product manager. I tried my best to answer the question on the spot but found my answer quite unsatisfactory, so here I am putting my thoughts into writing in another attempt to answer it better.
My initial reply to this question was as follows (verbatim):
"🤔 Assuming that 4 out of 5 A/B tests end up with a worse or inconclusive outcome and that organic traffic and paid traffic have the same conversion rate. By only limiting the A/B test to organic traffic, you’re only incurring opportunity cost while you’re incurring opportunity cost + marketing spend for paid traffic."
After putting more thought into it, I find the question too complicated to answer in a single paragraph, so I will break things down further and provide more concrete support for my assumptions and conclusions.
Assume that 4 out of 5 A/B tests end up with worse or inconclusive outcomes. This is a fair assumption: multiple sources and industry experts testify to the difficulty of conducting rigorous A/B tests and to how uninspiring the success rates are.
Some relevant articles:
- “We Have a 57% Failure Rate” – A/B Testing at Ladder
- 10 things I learned after 300+ A/B tests – UX Planet
Assume that organic traffic and paid traffic have the same conversion rate. This is a reasonable simplification, as conversion rates of related paid/organic channels (e.g. SEM vs. SEO) can be very close to each other; in my own anecdotal experience, organic traffic tends to convert slightly better than paid traffic. Another assumption that needs to be established here is that learnings from the organic segment transfer easily to the paid segment.
By limiting the A/B test to organic traffic, you incur only opportunity cost, whereas with paid traffic you incur opportunity cost plus marketing spend. There are plenty of ways to segment traffic by source and run tests only on the organic segment, so this part shouldn't be a problem. For organic traffic, the cost of a failed test is the revenue you could have earned had you skipped the experiment altogether. For the paid segment, a failed experiment means you not only miss out on that potential revenue but also burn the marketing spend used to buy the traffic that saw the losing variant.
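To make that asymmetry concrete, here is a minimal sketch in Python. Every input (traffic volume, conversion rate, order value, CPC, size of the negative lift) is a hypothetical placeholder, not real data:

```python
# Illustrative arithmetic only -- every input is an assumption, not real data.
test_visitors = 10_000        # visitors routed into the experiment
conversion_rate = 0.03        # assumed equal for organic and paid traffic
revenue_per_conversion = 50   # $, hypothetical average order value
cost_per_paid_visitor = 1.2   # $, hypothetical effective CPC
failed_lift = -0.10           # assumed: a losing variant converts 10% worse

# Revenue these visitors would have produced without the experiment,
# versus what they produce if the test fails.
baseline = test_visitors * conversion_rate * revenue_per_conversion
failed = baseline * (1 + failed_lift)
opportunity_cost = baseline - failed

print(f"Failed test on organic traffic: ${opportunity_cost:,.0f} "
      f"(opportunity cost only)")
print(f"Failed test on paid traffic:    "
      f"${opportunity_cost + test_visitors * cost_per_paid_visitor:,.0f} "
      f"(opportunity cost + ad spend)")
```

Under these made-up numbers, a failed organic-only test costs $1,500 in foregone revenue, while the same failed test on paid traffic additionally burns the $12,000 spent buying those visitors.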
A little number crunching reveals more about this problem. First, we establish a baseline from historical data.
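As a stand-in for real historical data, assume a hypothetical baseline like this (all figures illustrative):

```python
# Stand-in baseline -- every figure is hypothetical, for illustration only.
organic_visitors = 80_000        # monthly organic visitors
paid_visitors = 20_000           # monthly paid visitors
conversion_rate = 0.03           # assumed equal across both channels
revenue_per_conversion = 50      # $, average order value
cost_per_paid_visitor = 1.2      # $, effective cost per paid visitor

revenue = ((organic_visitors + paid_visitors)
           * conversion_rate * revenue_per_conversion)
ad_spend = paid_visitors * cost_per_paid_visitor

print(f"Baseline revenue:            ${revenue:,.0f}")
print(f"Baseline ad spend:           ${ad_spend:,.0f}")
print(f"Revenue - ad spend:          ${revenue - ad_spend:,.0f}")
print(f"Blended ROAS (rev/ad spend): {revenue / ad_spend:.2f}")
```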
Next, we use 10% of the total traffic to conduct our A/B test, simulating four setups (no A/B test, an organic-only test, a paid-only test, and a mixed-traffic test) and deriving insights from three outcomes (test failed, test succeeded, and test inconclusive).
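Here is a rough sketch of that scenario grid in Python, reusing the stand-in numbers from above. One modelling choice to flag as an explicit assumption: the ad spend for paid visitors routed into the test is attributed to the experiment, following the opportunity-cost argument earlier.

```python
# Rough reconstruction of the scenario grid -- all figures are the same
# stand-in numbers as above, not real data. Key modelling choice (an
# assumption): ad spend for paid visitors routed into the test is
# attributed to the experiment.
ORGANIC, PAID = 80_000, 20_000            # monthly visitors by channel
CR = 0.03                                 # conversion rate, both channels
REV = 50                                  # revenue per conversion, $
CPC = 1.2                                 # cost per paid visitor, $
TEST_SIZE = int(0.10 * (ORGANIC + PAID))  # 10% of total traffic is tested
LIFTS = {"failed": -0.10, "succeeded": 0.10, "inconclusive": 0.0}

# Share of the test cell drawn from (organic, paid) traffic.
CONFIGS = {
    "no test":       (0.0, 0.0),
    "organic only":  (1.0, 0.0),
    "paid only":     (0.0, 1.0),
    "mixed traffic": (0.8, 0.2),  # proportional to channel sizes
}

for config, (org_share, paid_share) in CONFIGS.items():
    for outcome, lift in LIFTS.items():
        tested = TEST_SIZE if (org_share + paid_share) > 0 else 0
        # Revenue the tested visitors produce under this outcome, versus
        # what they would have produced with no experiment at all.
        test_revenue = tested * CR * (1 + lift) * REV
        opportunity_cost = tested * CR * REV - test_revenue
        attributed_spend = tested * paid_share * CPC
        print(f"{config:13s} | {outcome:12s} | "
              f"opportunity cost ${opportunity_cost:7,.0f} | "
              f"ad spend ${attributed_spend:7,.0f} | "
              f"net cost ${opportunity_cost + attributed_spend:7,.0f}")
        if tested == 0:
            break  # no visitors tested, so every outcome is identical
```

Under these assumptions, the organic-only setup always carries the lowest net test cost, simply because it never attributes any ad spend to the experiment.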
With the specific steps broken down, it seems like testing on organic traffic consistently comes out on top in terms of both Return on Ad Spend (ROAS) and revenue minus ad spend.
I'm not sure if I missed any critical detail here. Please do reach out and let me know if I made any wrong assumptions or calculations.