Yesterday I told you about Jim Yaghi.
The computer scientist and lead generation master.
Remember that?
Well, today, with his permission, I’m reprinting an email he wrote about why split testing is (GASP!) a waste of time.
Yes, I know this goes against the grain.
But, most truth does.
In fact, I’ve been doing what he says (below) and my opt-in conversions are overall higher.
Definitely worth seeing if you get the same results.
So, without further ado…
Take it away, Jim:
Last night, I went to bed with the SPLITTING-EST of headaches.
Was gonna pass out; even after 3 pills of the strongest kind, and Voltaren cream on my neck and head…the pain got the better of me.
Lack of sleep is the culprit – but like I always say – plenty of time to rest when I’m dead.
Speaking of SPLITTING…
How about what I said a while ago?
“Ben and I both agree that split-testing is a waste of time”
I knew someone would challenge me on it…and it was an Engineer client of ours who ended up asking, “What do you mean it’s a waste of time?”
Oddly enough, I was on a call with Ben Settle at the exact moment the email came in, discussing that exact topic.
When I told Ben YEARS ago that split-testing is stupid…he was shocked!
“Jim,” he said. “You are a SCIENTIST! Don’t you even have a Math degree or something? I thought you’d be the ONE guy who’d RELIGIOUSLY split-test everything.”
Yes, I do have a math degree somewhere. But nope, I split-test NOTHING.
Skeptical as he was, Ben figured I knew what I was talking about and gave MY method a try.
Today, he and I laugh (all the way to the bank) at people who split-test.
Split-testing is an attempt to make SCIENCE out of marketing…when, let’s be honest, marketing is more a TRADE than a science. Or if you’re me – Handsome and Awesome – then it’s more of an ART than a science.
Engineers and people from science backgrounds tend to enjoy the idea that marketing could somehow become scientific and predictable…although it NEVER really is.
It’s virtually random. Without order. It’s all grounded in PEOPLE’s EMOTIONS – nothing logical can describe or model it.
– Does this red button work better than this green button?
– Does this headline perform better than that one?
– Does this landing page convert higher than the old one?
All good questions.
But running the split-test itself could be DANGEROUS.
Not only do you risk killing a money-making version of your ad…
You also risk keeping the LOSER version instead of the WINNER over time, which will progressively DROP your sales rate.
It’s true.
Look, the whole concept of split-testing relies on statistics, which is the science of APPROXIMATING the FUTURE from OLD DATA.
All statistical tests start on the premise that a sample should on some level represent the population.
Meaning, if you run a statistical experiment in marketing, you’re doing it with the belief that the 1,000 or 10,000 people who see your sales message behave like, and are a “good-enough” representation of, your ENTIRE market.
You certainly don’t show your ads to the ENTIRE market during a split-test.
And the ENTIRE market does not behave the exact same way as the 1,000 people you test.
And 10,000 people don’t behave exactly the same as 1,000 people in a test.
We know this much.
Sure, there could be patterns in each group’s behavior, but they differ from one group to the next.
And theoretically, the bigger the group, the closer you get to describing the entire population of your market.
So split-tests may be MORE meaningful when you’re dealing with a sample size on the order of 10,000 – 100,000 hits/day…but that’s rarely when marketers are using split-tests.
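Don’t take my word for it. Here’s a quick Python sketch you can run yourself – the “true” opt-in rates are made up purely for illustration – that shows how often a split-test crowns the WRONG winner at different sample sizes:

```python
import random

# Made-up "true" opt-in rates, purely for illustration.
TRUE_RATE_V1 = 0.10
TRUE_RATE_V2 = 0.12  # v2 is the real winner, by a small margin

def split_test_winner(visitors_per_version):
    """Simulate one split-test and return the version that 'won'."""
    opt_ins_1 = sum(random.random() < TRUE_RATE_V1 for _ in range(visitors_per_version))
    opt_ins_2 = sum(random.random() < TRUE_RATE_V2 for _ in range(visitors_per_version))
    return "v1" if opt_ins_1 > opt_ins_2 else "v2"

# How often does the REAL loser (v1) get crowned the winner?
for n in (100, 1_000, 10_000, 100_000):
    trials = 1_000
    wrong = sum(split_test_winner(n) == "v1" for _ in range(trials))
    print(f"{n:>7,} visitors/version: loser wins {wrong / trials:.0%} of tests")
```

Run it a few times. With 100 visitors per version, the loser “wins” a scary chunk of the time…and it only settles down once the traffic gets HUGE.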
One thing we can be certain of, is that SOME of your market will respond to Version 1 of your ad while others will respond to Version 2 of your ad.
And those who respond to Version 1 may not ever respond to Version 2 and vice versa.
This in itself is a good enough reason to NOT split-test.
And don’t forget that when REAL scientists run statistical experiments, they attempt to create a COMPLETELY controlled environment with as close to ZERO bias as possible, one way or the other.
Something you can ONLY pull off in a lab, with lab rats – a luxury we don’t have online.
And they use tests like Chi-Square, or Kendall’s, Pearson’s, and Spearman’s correlations, to check whether the results of one variation against another might somehow be due to PURE chance.
(Meaning, they ALSO test HOW WRONG THEY MIGHT BE in reaching their conclusion!)
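Just so you can see what that kind of test looks like in practice, here’s a rough sketch using SciPy’s chi-square test – the opt-in counts are hypothetical, picked only to make the point:

```python
from scipy.stats import chi2_contingency  # assumes SciPy is installed

# Hypothetical split-test results: [opt-ins, non-opt-ins] per version.
observed = [
    [30, 970],   # v1: 30 opt-ins from 1,000 visitors (3.0%)
    [45, 955],   # v2: 45 opt-ins from 1,000 visitors (4.5%)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.3f}")

# A high p-value means the "winner" could easily be PURE chance.
if p_value > 0.05:
    print("Could be chance – no real winner here, despite the 50% 'lift'.")
else:
    print("Unlikely to be chance alone.")
```

See that? v2 looks a whole 50% better than v1…yet the test says the difference could still be dumb luck. THAT’S the trap untrained hands walk into.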
It’s so complicated; it’s enough to give YOU and ME both a splitting headache.
Let’s put it this way:
Split-testing is a dangerous tool in untrained hands.
Plus there’s a much simpler alternative that works better.
Whether your ad is Version 1 or Version 2, or V.3 or V.10, all you REALLY care about is that your ad is making money (or generating leads).
So if you run two versions and BOTH succeed in doing that (e.g., v1 gets a 10% opt-in rate and v2 gets 30%), DON’T kill v1!
If you kill v1 because it scored worse than v2, you’re assuming that ONE ad is going to be responded to by 100% of your market.
That’s not true is it?
V1 got 10% of your market to respond. So keep it. Unless you want to turn away that 10% of your market.
Instead, create a v3 based on v1’s success and run all 3 versions of your ad.
This way, you INCREASE your odds of getting an even LARGER share of your market to respond to your ads…WITHOUT turning away anyone else who behaves like those who responded to v1 or v2!
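If you want to picture the mechanics, here’s a bare-bones Python sketch of the rotation idea – the version names and weights are hypothetical, just to show the shape of it:

```python
import random

# Hypothetical pool of PROVEN money-making versions – none of them get killed.
# Weights just steer more traffic toward the stronger pullers.
versions = {"v1": 1, "v2": 3, "v3": 2}

def pick_ad():
    """Serve EVERY proven version, weighted, instead of keeping only one."""
    names = list(versions)
    weights = [versions[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Each visitor gets one of the surviving versions.
for visitor in range(5):
    print(f"visitor {visitor + 1} sees {pick_ad()}")
```

Every proven version keeps pulling its share of the market, and the new v3 gets its shot without anybody getting turned away.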
Hey, now you don’t suppose that’s the ONLY marketing trick I know, do you?
See, the point here is that I know GANGS of marketing tricks that make life easier and less complicated, and make much more money.
You can read about them here:


