
10 Tips for Running Experiments With Low Traffic


We can’t all be the most popular application or website. So what happens when you have low traffic but you want to run experiments to learn about your users?

If you’re involved in an organization’s experimentation program, you know that every team relies on statistical rigor and confidence in the data they are seeing. Whether it’s the product and research teams solving user problems, the designers and engineers building the solutions, or the analysts reviewing the data, experimentation matters to all of them.

Statistical significance represents the likelihood that the difference in your metric between a selected treatment and the baseline treatment is not due to chance. Your significance threshold is a representation of your organization’s risk tolerance: formally, it is the probability of accepting a false positive, and in most organizations it is set to 0.05 (a 95% confidence level). Reaching that bar requires high traffic volumes if experiments are to run for days rather than weeks or months.
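
To make the traffic requirement concrete, here is a minimal sketch of the standard two-proportion sample-size calculation, using only the Python standard library. The baseline conversion rate, expected lift, and daily traffic figures are illustrative assumptions, not Split data.

```python
from statistics import NormalDist

def sample_size_per_variation(baseline, lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation for a two-sided
    two-proportion z-test (normal approximation)."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g., 1.96 at alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g., 0.84 at 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p1) ** 2)
    return int(n) + 1

# Hypothetical numbers: 3% baseline conversion, 10% relative lift.
n = sample_size_per_variation(0.03, 0.10)  # roughly 53,000 per variation
daily_visitors_per_arm = 250               # hypothetical low-traffic site
print(n, "visitors per variation, about", n / daily_visitors_per_arm, "days")
```

At these assumed numbers the test would need well over half a year of traffic, which is exactly the problem the tips below address.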

Low-traffic testing is defined as not having the volume of traffic needed to test outcomes across your website, app, journey, or pages. In other words, you lack the sample size required to measure statistical significance when running experiments.

As a result, teams are unable to reach statistical significance in a reasonable amount of time: they run experiments for weeks or months and cannot draw reliable conclusions from the data. So is experimenting with low traffic futile? Should we spend our time, effort, and resources on something with more gain? The answer is no.

These challenges can be overcome by paying careful attention to the design and hypothesis of your experiments. While there are nuances to consider when experimenting and measuring with low traffic, there are still many benefits to be gained. With low traffic, your statistical measurement may not be as rigorous, so it’s important to also consider qualitative feedback. Here are 10 tips for running experiments with low traffic:

1. Stick to Two Variations

You should limit your testing to an A/B split.

You’ll want to avoid more than two variations at a time and design a series of consecutive experiments for the same hypothesis before drawing conclusions. Reducing to two variations allows for more traffic to be distributed among fewer experiences.
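
The arithmetic behind this tip is simple: the sample you need per variation stays the same, but the traffic feeding each variation shrinks as you add arms. A small sketch, with hypothetical traffic and sample-size numbers:

```python
def days_to_power(n_per_arm, daily_traffic, num_variations):
    """Days needed to collect n_per_arm visitors in every variation
    when daily_traffic is split evenly across the variations."""
    return n_per_arm * num_variations / daily_traffic

n = 5000        # hypothetical required sample per variation
traffic = 500   # hypothetical daily visitors to the page
print(days_to_power(n, traffic, 2))  # A/B test: 20.0 days
print(days_to_power(n, traffic, 4))  # A/B/C/D test: 40.0 days
```

Under these assumptions, every extra variation adds the full per-arm sample requirement to your runtime, which is why consecutive A/B tests usually beat one A/B/C/D test on a low-traffic site.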

2. Test Bold Changes

Be sure to test dramatic changes your visitors are likely to notice.

  • Test high-impact changes that are likely to affect your primary metric.
  • Test changes a visitor must engage with and/or that directly solve a user problem.
  • Send your designs to others in the business (those not close to the project) to check whether they notice a change.
  • Note that experiments involving bigger changes, or multiple elements on a page, are more likely to detect a significant effect.
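
Bold changes pay off because, in the usual normal-approximation tests, the required sample size scales roughly with the inverse square of the effect size. A quick sketch (the 5% and 20% lifts are illustrative assumptions):

```python
def sample_size_ratio(effect_a, effect_b):
    """Approximate ratio n_b / n_a of required sample sizes:
    sample size scales with 1 / effect**2, so quadrupling the
    detectable effect divides the required sample by sixteen."""
    return (effect_a / effect_b) ** 2

# A change expected to drive a 20% lift needs roughly 1/16th
# the sample of a timid change expected to drive a 5% lift.
print(sample_size_ratio(0.05, 0.20))  # 0.0625
```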

3. Test Changes Above the Fold

100% of visitors see content above the fold.

  • Ensure that visitors see and can engage with the change being tested.
  • Check your scroll depth data for the page(s) in question to see where user drop-off points are.
  • If testing below the fold, consider creating custom impressions that fire an event at certain scroll-depth points (e.g., reaching 30% of the page) so you can track the visitors who were actually exposed to the change.

4. Be Thoughtful With Targeting

Use on a case-by-case basis.

  • Targeting can reduce the noise in your sample to measure those who matter and help detect effects.
  • However, it can significantly reduce your sample size and result in an underpowered test (inconclusive).
  • It can also limit the generalizability of your results if you plan to apply the learnings to a universal population.

5. Test Across Shared Page Templates or Layouts

Testing across similar or shared page templates increases the traffic available.

Group pages together where it makes sense and experiment at the template level instead of the page level (e.g., product category pages, product detail pages, landing pages, search pages, etc.).

6. Increase Statistical Significance Threshold

You don’t need to use a 0.05 significance threshold every time.

  • Statistical significance relates to the amount of risk you can accept. You and your organization may be able to accept a higher amount of risk for low-risk experiments.
  • Set a higher significance threshold (e.g., 0.10) where there is lower risk (e.g., higher up the funnel, landing pages).
  • The “best practice” for significance threshold is 0.05 or less for running experiments that have a higher degree of risk (e.g., checkout funnels).
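
The sample-size savings from relaxing the threshold can be quantified: for a fixed effect size, the required sample is proportional to the squared sum of the two z-scores involved. A small stdlib sketch, assuming a two-sided test at 80% power:

```python
from statistics import NormalDist

def z_sum_sq(alpha, power=0.8):
    """(z_{1-alpha/2} + z_{power})^2: for a fixed effect size,
    required sample size is proportional to this factor."""
    nd = NormalDist()
    return (nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)) ** 2

saving = 1 - z_sum_sq(0.10) / z_sum_sq(0.05)
print(f"Raising alpha from 0.05 to 0.10 cuts the required "
      f"sample by about {saving:.0%}")  # roughly a 21% reduction
```

That is a meaningful runtime reduction on a low-traffic site, in exchange for accepting a higher false-positive rate on a low-risk experiment.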

7. Use the Right Primary Metric

Measuring micro-conversions increases chances for success.

  • The journey to a macro-conversion starts with micro-conversions.
  • Make sure your primary metric lives on the same page as the change being tested.

8. Leverage Seasonality

Test during peak traffic times to maximize insight.

  • Peak traffic seasons (holidays, special events, special offers, new launches) may give you the traffic size you’re looking for, but be mindful that behavior during special events and seasons may not apply in non-peak seasons.
  • Align with your marketing team on the calendar of scheduled offers and events, which will also drive traffic to the site, and keep each other informed of your strategies.

9. Some Data is Better than None

Inconclusive data can still inspire new hypotheses and iterations.

  • An inconclusive or nonsignificant outcome is still more data than you started with: it means the difference between the variations was not large enough to detect under the circumstances. Take the information and insight you learn from your test and iterate.
  • Inconclusive metrics for a split that does not reach statistical significance can be confusing and disappointing. They mean the data does not support your original hypothesis for that metric (unless you were running a ‘do-no-harm’ test), and there is not enough evidence to conclude that the treatment had any real impact.
  • Don’t be discouraged by inconclusive metrics! They are still very valuable; while not having your hypothesis validated might feel disappointing, this is actually one of the main ways experimentation brings value.
  • Leverage these figures to draw conclusions that help you iterate and form new hypotheses.
  • Use them to decide on next steps if you’re testing an MVP.

10. Utilize a Different Traffic Type

For B2B/account-based traffic, switch to user traffic type.

  • Unless you need to maintain an identical experience for all users in a particular account, implement a ‘user’ traffic type for experimentation and measurement, instead of using an ‘account’ traffic type.
  • This creates a dramatically larger sample size, since each account typically contains many users.
  • Can be done on a case-by-case basis depending on the experiment’s success metrics and sample size needed.
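
The switch comes down to which identifier you feed into your randomization. A minimal sketch of deterministic, hash-based bucketing (the ids and the two-treatment setup are hypothetical, not Split’s actual assignment algorithm):

```python
import hashlib

def assign(unit_id: str, variations=("control", "treatment")) -> str:
    """Deterministic bucketing sketch: hash the randomization unit.
    Keying by user id instead of account id multiplies the number
    of independent units, and thus the effective sample size."""
    h = int(hashlib.md5(unit_id.encode()).hexdigest(), 16)
    return variations[h % len(variations)]

# Account-keyed: every user at acme-corp shares one assignment,
# so the whole account counts as a single experimental unit.
print(assign("account:acme-corp"))
# User-keyed: each employee is an independent unit.
print(assign("user:alice@acme.example"), assign("user:bob@acme.example"))
```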

Experiment Until You Can Make Flawless Decisions

A product manager’s role is to understand what matters most to their customers, to ensure they are building the right products and continuing to innovate their offering. Customers’ ever-increasing expectations of the user experience mean product managers are turning to experimentation to reduce the opportunity cost of choosing which ideas to pursue.

So, even if your site isn’t bringing in huge numbers, it’s important to remember that experimentation is about learning from your customers while leveraging the data you have to make the best decisions. Low-traffic testing is possible with the recommendations given above, so apply these principles when running experiments!
