If I hear another person talk about "learnings" from their landing page test, I'll have to scream.
We all want to create "meaning" and see larger patterns in our test results that can apply to other circumstances. This is natural: a generalization is a shortcut that spares us from having to think through similar situations from scratch in the future. But the world is very complex, and trying to extract universal truths from a single landing page test result is often a horrible idea.
Unless you're running a very simple head-to-head split test, and you change only a single tactical and self-contained page element, such as a headline or button color, any "learnings" are likely to be very dubious indeed.
The reason for this is known in science as "false causality." As a common scientific saying goes, "Correlation does not imply causation."
The phrase above doesn't use the word "imply" in its common sense (i.e., to suggest). The scientific sense of "implies" (taken from formal logic) can be better translated as "requires."
The phrase refers to a common error that people make. They assume that because two effects are related or occur together, one causes the other. This isn't necessarily the case. There may be a third, previously unrecognized, lurking variable (also called a confounding variable, or confounding factor) that causes the other two.
For example, if I told you that half of auto accidents occur within five miles of your home, you might conclude that there's something inherently dangerous about driving in your neighborhood. You might theorize (as some have done) that perhaps you're more absent-minded, hurried, or on autopilot during short errands and trips near your home.
This seems reasonable on its face. However, it's dead wrong.
What if I told you that half of our driving occurs within five miles of home? Now a very different picture emerges. The accident rate per mile is the same -- we just happen to be near our homes more of the time.
Ignoring the confounding variable of miles driven led to the erroneous conclusion that driving close to home is inherently more risky. That false conclusion was then used to generate all kinds of spurious explanations for the observation.
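The arithmetic is easy to make explicit. Here's a minimal sketch with invented numbers (they illustrate the logic, not real accident statistics): once you divide accidents by miles driven in each zone, the "danger" near home disappears.

```python
# Invented numbers for illustration only -- not real accident data.
total_accidents = 1000
total_miles = 10_000_000

# Half of all accidents AND half of all driving happen within
# five miles of home.
near_accidents = total_accidents * 0.5
far_accidents = total_accidents * 0.5
near_miles = total_miles * 0.5
far_miles = total_miles * 0.5

# Normalize by the confounding variable: miles driven in each zone.
near_rate = near_accidents / near_miles  # accidents per mile near home
far_rate = far_accidents / far_miles     # accidents per mile far from home

print(near_rate == far_rate)  # True: per mile, near home is no riskier
```

Looking only at the raw accident counts hides the denominator; dividing by exposure (miles driven) is what reveals that the two zones are equally risky.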
In landing page optimization, many people also insist on extracting so-called "learnings" from their test results. Hindsight is used to rationalize why a particular landing page version had a higher conversion rate.
For example, I may test two call-to-action buttons: orange and green. If the green one performs better, I may be tempted to conclude that my audience likes the color green more than orange.
However, there may be another explanation: the contrast of the button color with the main color theme of the page. If my page were predominantly orange-themed, the orange call-to-action button would seem muted and may get lost in a scene composed of similar colors. The green button color may stick out and seem more prominent because of its contrast, and not the actual color used.
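To see how this confound could fool you, here's a toy simulation (all probabilities and visitor counts are invented) in which conversion depends only on the button's contrast with the page theme, never on the color itself. The "winning" color flips the moment the page theme changes:

```python
import random

random.seed(42)

# Toy model: conversion depends ONLY on contrast with the page theme.
# The specific probabilities are made up for illustration.
def conversion_prob(button_color, page_theme):
    if button_color == page_theme:
        return 0.02  # low contrast: button gets lost in the page
    return 0.05      # high contrast: button stands out

def run_test(button_color, page_theme, visitors=100_000):
    p = conversion_prob(button_color, page_theme)
    conversions = sum(random.random() < p for _ in range(visitors))
    return conversions / visitors

# On an orange-themed page, green "wins"...
green_rate = run_test("green", page_theme="orange")
orange_rate = run_test("orange", page_theme="orange")
print(green_rate > orange_rate)

# ...but swap the theme and orange "wins". Color was never the cause.
print(run_test("orange", page_theme="green") > run_test("green", page_theme="green"))
```

A naive reading of the first test would be "our audience prefers green." The second test, on a green-themed page, reverses the result and exposes contrast as the real (lurking) variable.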
Trying to rationalize results after the test is a dangerous activity, because it may cause you to inappropriately fixate on elements of your design that had nothing to do with the performance improvement.
Try to restrain yourself from engaging in this kind of after-the-fact myth construction. Instead, focus on coming up with diverse and interesting ideas for every one of your landing page tests.
Retesting things that you already "know" is definitely in order. Otherwise, you're simply working based on assumptions. And as they say: When you "assume" you make an "ass" of "u" and "me."
Keep an open "beginner's mind" and continue to see each test as a new experience.