Landing page optimization is based on statistics, statistics is based on probability theory, and probability theory is the study of random events.

But a lot of people might object that the behavior of your landing page visitors isn’t “random.” Your visitors aren’t as simple as the roll of a die. They visit your landing page for a reason, and act (or fail to act) based on their own internal motivations.

So what does probability mean in this context?

Let’s conduct a little thought experiment. Imagine that I’m about to flip a fair coin. It has the potential to land on either heads or tails.

What would you estimate the probability of it coming up heads to be? Fifty percent, right? So would I.

Now imagine that I’ve flipped the coin and covered up the result after catching it in my hand. The process of flipping is now complete, and the coin has taken on one particular state.

Now what would you estimate the probability of it coming up heads to be? Fifty percent again, right? I’d agree because neither of us knows any more than before the coin was flipped.

Now imagine if I peeked at the coin without letting you see it. What would you estimate the probability of it coming up heads to be? Still 50 percent, right?

How about me? I’d no longer agree with you. Having seen the outcome of the flip event, I would declare that the probability of coming up heads is either 0 percent or 100 percent (depending on what I’ve seen).

How can we experience the same event and come to two different conclusions?

Who’s correct? The answer is — both of us. We’re basing our answers on different available information.

Not having seen the outcome of the flip, you must assume that the coin can still come up heads or tails. In effect, for you the coin hasn’t been flipped, but rather remains in a state of pre-flipped potential. I, on the other hand, know more, so my answer is different.

So probability can be viewed as simply taking the best guess given the available information. The more information you have, the more accurate your guess will become.

Let’s look at this in the context of the simplest type of landing page optimization.

Let’s assume that you have a constant flow of visitors to your landing page from a steady and unchanging traffic source. You decide to test two versions of your page design, and split your traffic evenly and randomly between them.

In statistical terminology, you have two stochastic processes (experiences with your landing pages), with their own random variables (visitors drawn from the same population), and their own measurable binary events (either visitors convert or they don’t). The true probability of conversion for each page isn’t known, but must be between zero and one. This true probability of conversion is what we call the conversion rate, and we assume that it’s fixed.
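This setup is easy to simulate. The sketch below models each visitor as an independent 0/1 (Bernoulli) outcome, one stream per page version. The function name and the "true" conversion rates are hypothetical, chosen only for illustration; in a real test the true rates are exactly what you don't know.

```python
import random

random.seed(42)  # reproducible illustration

def simulate_visitors(true_rate, n):
    """Simulate n independent visitors; each converts (1) with
    probability true_rate, otherwise doesn't (0)."""
    return [1 if random.random() < true_rate else 0 for _ in range(n)]

# Hypothetical true conversion rates -- unknown in a real test.
conversions_a = simulate_visitors(0.05, 10_000)
conversions_b = simulate_visitors(0.06, 10_000)

rate_a = sum(conversions_a) / len(conversions_a)
rate_b = sum(conversions_b) / len(conversions_b)
print(f"Measured rate A: {rate_a:.4f}  Measured rate B: {rate_b:.4f}")
```

Run it a few times with different seeds and you'll see the measured rates wobble around the true values, which is the whole reason significance testing exists.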

From the law of large numbers we know that when sampling a very large number of visitors, the measured conversion rate will approach the true probability of conversion. From the central limit theorem we also know that the chances of the true value falling within three standard errors of your observed rate are very high (99.7 percent), and that this range will continue to narrow as you collect more data (its width shrinks in proportion to the square root of the sample size).
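The narrowing is concrete enough to compute. Under the normal approximation to the binomial, the standard error of a measured rate *p* over *n* visitors is sqrt(p(1-p)/n). A minimal sketch (the visitor counts are hypothetical, and the helper name is mine):

```python
import math

def three_sigma_interval(conversions, visitors):
    """99.7% (plus/minus 3 standard errors) range for a measured
    conversion rate, via the normal approximation to the binomial."""
    p = conversions / visitors
    se = math.sqrt(p * (1 - p) / visitors)
    return p - 3 * se, p + 3 * se

# Same 5% measured rate, increasing sample sizes: the range narrows.
for n in (1_000, 10_000, 100_000):
    lo, hi = three_sigma_interval(int(0.05 * n), n)
    print(f"n = {n:>7}:  {lo:.4f} .. {hi:.4f}")
```

At 1,000 visitors the plausible range is several percentage points wide; at 100,000 it is a fraction of a point, which is why small tests so often look "better" or "worse" by chance alone.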

Basically, measured conversion rates will wander within ever-narrower ranges as they get closer to their true respective conversion rates. By seeing the amount of overlap between the two bell curves representing the normal distributions of the conversion rate, you can determine the likelihood of one version of the page being better than the other.

One of the most common questions in inferential statistics is to see if two samples are really different, or if they could have been drawn from the same underlying population as a result of random chance alone. You can compare the average performance between two groups by using a *t*-test computation. In landing page testing, this kind of analysis would allow you to compare the difference in conversion rate between two versions of your site design.

Let’s suppose that your new version had a higher conversion rate than the original. The *t*-test would tell you if this difference was likely due to random chance or if the two were actually different.

There is a whole family of related *t*-test formulas based on the circumstances. The appropriate one for head-to-head landing page optimization tests is the *unpaired one-tailed equal-variance t-test*. The test produces a single number as its output. The higher this number is, the higher the statistical certainty that the two outcomes being measured are truly different.

Lest you be scared by the imposing name of the test, let me assure you that it’s very easy to compute and requires only basic spreadsheet formulas.
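To make that concrete, here is one way to compute the statistic directly. For 0/1 conversion data, the sample variance of a page with measured rate *p* over *n* visitors is p(1-p)·n/(n-1), the two variances are pooled, and the difference in rates is divided by the pooled standard error. The function name and the conversion numbers below are hypothetical:

```python
import math

def equal_variance_t(conv_a, n_a, conv_b, n_b):
    """Unpaired equal-variance t statistic comparing two conversion
    rates, treating each visitor as a 0/1 outcome."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Sample variance of a 0/1 variable (Bessel-corrected).
    var_a = p_a * (1 - p_a) * n_a / (n_a - 1)
    var_b = p_b * (1 - p_b) * n_b / (n_b - 1)
    # Pool the two variances, then divide the rate difference by
    # the pooled standard error of that difference.
    pooled = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    return (p_b - p_a) / math.sqrt(pooled * (1 / n_a + 1 / n_b))

# Hypothetical test: 500 conversions from 10,000 visitors on the
# original vs. 600 from 10,000 on the challenger.
t = equal_variance_t(500, 10_000, 600, 10_000)
print(f"t = {t:.2f}")
```

In a spreadsheet, the equivalent result comes from the built-in TTEST function with tails = 1 and type = 2 (unpaired, equal variance), applied to the two raw columns of 0/1 outcomes.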

