Landing Page Optimization Pitfall — Not Collecting Enough Data

In statistics, results do not become stable until a large enough sample has been measured. Accordingly, basing landing page decisions on data from too small a sample can lead marketers to poor conclusions.

One of the advantages of online marketing in general (and landing page optimization in particular) is the ability to measure everything. All online marketing campaigns and programs should be run “by the numbers.” The difficult part is knowing which numbers to use and when.

Many online marketers like to watch the pot boil. They come to resemble stock market day traders who get hooked on the action and make frequent changes to their programs. Some even use automated tools (e.g., for PPC bid management) to make decisions more frequently than is humanly possible.

There is nothing inherently wrong with automated tools or frequent changes, but each change should be made only after an appropriate amount of data has been collected. Unfortunately, this is often not the case in practice.

Consider the dangers of small sample sizes. Suppose that your e-commerce website has had four visitors, and one of them bought something. So what is your conversion rate of visitors to sales?

If you answered 25% (one sale out of four visitors), you are probably way off. Most e-commerce sites we have seen range from 1% to 5%. Unless your product is unique, indispensable, and available for sale only from your site, the 25% conversion rate is highly unlikely.

Similarly, if you had no sales after the first four visits, you would probably be wrong to conclude that your conversion rate was really zero, and that you would never get a sale. This example may seem a bit extreme, but too many landing page tests are decided with inappropriately small data samples. And remember that your sample size should be expressed in the number of conversion actions, and not in the number of unique visitors.
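
To make the four-visitor example concrete, here is a minimal Python sketch (assuming SciPy is installed) that computes an exact Clopper-Pearson confidence interval for one sale in four visits. The code and numbers are illustrative, not a prescribed tool:

    # Exact Clopper-Pearson interval for a conversion rate measured
    # from a tiny sample: 1 sale out of 4 visitors.
    from scipy.stats import beta

    successes, trials = 1, 4
    alpha = 0.05  # for a 95% confidence interval

    # Guard the edge cases (0 successes or all successes) where the
    # beta quantile is undefined.
    lower = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0

    print(f"Observed conversion rate: {successes / trials:.0%}")   # 25%
    print(f"95% confidence interval: [{lower:.1%}, {upper:.1%}]")  # [0.6%, 80.6%]

With only four visitors, the data is consistent with almost any plausible conversion rate, from well under 1% to over 80%, which is exactly why the observed 25% tells you almost nothing.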

Once you start the data collection in your test, resist the temptation to monitor the results frequently, especially early in the test. Otherwise, you run the risk of getting on an emotional roller-coaster caused by the early streaks in the data. One moment you may be euphoric about excellent results, and the next you may be despondent as the indicated improvement vanishes like a mirage.

So pick a statistical confidence level for your answer and wait until you have collected enough data to reach it. You need the self-discipline not to even look at the early results.
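
To get a feel for how much data “enough” can be, here is a sketch using the standard sample size formula for comparing two proportions. The 2% baseline rate, 20% relative lift, 95% confidence, and 80% power below are illustrative assumptions, not figures from this article:

    # Visitors needed per variation to detect a difference between two
    # conversion rates with a two-sided test of two proportions.
    from math import ceil, sqrt
    from scipy.stats import norm

    def visitors_per_variation(p1, p2, alpha=0.05, power=0.80):
        z_a = norm.ppf(1 - alpha / 2)   # critical value for the confidence level
        z_b = norm.ppf(power)           # critical value for the desired power
        p_bar = (p1 + p2) / 2
        n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
             + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
        return ceil(n)

    # Detecting a lift from a 2% baseline to 2.4% (a 20% relative improvement)
    print(visitors_per_variation(0.02, 0.024))  # roughly 21,000 visitors per variation

Small conversion rates and small lifts drive the required sample size up quickly; halving the detectable lift roughly quadruples the traffic you need.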

Remember, the Law of Large Numbers in statistics tells us that the measured conversion rate will eventually stabilize at its actual value. But it does not tell us how long this will take, and large deviations from the true mean are possible early in the data collection.
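
A quick simulation makes this visible. The sketch below (assuming NumPy; the 3% true rate and the random seed are arbitrary choices) tracks the running measured conversion rate of a simulated site. Early readings routinely land far from the true value before the average settles down:

    # Law of Large Numbers in action: the running conversion rate of a
    # simulated site whose true conversion rate is 3%.
    import numpy as np

    rng = np.random.default_rng(seed=42)
    true_rate = 0.03
    conversions = rng.random(20_000) < true_rate            # True = a sale
    running_rate = np.cumsum(conversions) / np.arange(1, 20_001)

    for n in (100, 1_000, 20_000):
        print(f"after {n:>6} visitors: measured rate = {running_rate[n - 1]:.2%}")

Rerun it with different seeds and you will see the 100-visitor reading swing wildly, sometimes double the true rate and sometimes near zero, while the 20,000-visitor reading sits close to 3%.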

Even if you are confident that your new landing page converts better than the original, you still do not have a clear indication of exactly how much better it is until you have collected enough data.

Do not simply use the observed difference in lift as your reported answer. Always present it with the correct error bars (also called “confidence intervals”). I strongly urge you to review a primer on basic statistics to solidify your understanding.
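
As a sketch of what that reporting looks like, the snippet below (assuming SciPy; the traffic and conversion counts are made up for illustration) computes a normal-approximation 95% confidence interval on the lift between two observed conversion rates:

    # Confidence interval on the lift (rate_b - rate_a) between two pages,
    # using the normal approximation for a difference of two proportions.
    from math import sqrt
    from scipy.stats import norm

    def lift_ci(conv_a, n_a, conv_b, n_b, alpha=0.05):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
        z = norm.ppf(1 - alpha / 2)
        return (p_b - p_a) - z * se, (p_b - p_a) + z * se

    # Original page: 200 sales from 10,000 visitors (2.0%)
    # Challenger:    240 sales from 10,000 visitors (2.4%)
    low, high = lift_ci(200, 10_000, 240, 10_000)
    print(f"95% CI on the lift: [{low:+.2%}, {high:+.2%}]")  # about [-0.01%, +0.81%]

Note that in this made-up example the interval still spans zero after 10,000 visitors per page, so the seemingly healthy observed lift of 0.4 percentage points is not yet distinguishable from no improvement at all.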
