
Stop Guessing and Start Measuring: A Guide to Implementing Incrementality

Incrementality is a relatively new term, but it’s not a new concept.

Incrementality is the measure of the true value created by any business strategy, determined by isolating and measuring the results it caused, independent of other potential business factors.

Incrementality is calculated by comparing differences in outcomes between two separate groups of people: those who’ve been exposed to the strategy and those who haven’t.
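The comparison described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original guide; the group sizes and conversion counts are made up for the example.

```python
# Incrementality: the difference in outcomes between people exposed to a
# strategy (test group) and people who were not (control/holdout group).

def incremental_lift(test_conversions, test_size, control_conversions, control_size):
    """Return the absolute and relative incremental lift of a treatment."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    absolute_lift = test_rate - control_rate       # outcomes caused by the treatment
    relative_lift = absolute_lift / control_rate   # lift as a share of the baseline
    return absolute_lift, relative_lift

# Example: a 3% conversion rate in the exposed group vs. 2% in the holdout
abs_lift, rel_lift = incremental_lift(300, 10_000, 200, 10_000)
print(round(abs_lift, 4))  # 0.01 -> one extra conversion per 100 people exposed
print(round(rel_lift, 4))  # 0.5  -> a 50% lift over the baseline
```

Everything the exposed group does beyond the control group's baseline is the incremental value of the strategy; everything up to that baseline would likely have happened anyway.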

For years, businesses have been able to effectively measure how choices, in areas such as product testing or direct mail, incrementally influence their business outcomes. This same method of measurement is possible with digital advertising.

In this guide, we will walk you through some of the essential questions and considerations along the path to adopting incrementality.

These best practices can help make your strategies both successful and impactful.

Are you ready to embrace incrementality?

Incrementality measurement allows you to make smarter business decisions by helping you understand how and where marketing is contributing to your business outcomes.

Before shifting your measurement models to incrementality, it’s important to ensure that your organization is ready to make the investment and do the work needed to embrace a data-driven, test-and-learn approach.

Step #1

Choose a business question

Your first step is to decide which business question you want to answer. Once that is established, take time to consider your options and be deliberate about what you want to measure and how you’d like to measure it.

Step #2

Choose a method

There are multiple techniques you can use, which fall into two main categories: experimental and observational.

Experimental methods

The most effective way to measure incrementality is to run an experiment in which you tightly manage the strategy, or “treatment,” to which people are exposed.


Begin by developing a hypothesis about the effect your change in strategy will have. Then designate a group (or groups) of people who will be exposed to the treatment, and a control group who will not.

By isolating the exposure of a variable, such as creative or audience, and then comparing it to the control group, you can understand the true incremental value of the strategy. The quality of experiments may vary, but they are still the ideal and most accurate way to measure incrementality.


Observational methods

Begin with an existing set of data that resulted from exposing people to a certain ad or ad variable, and then apply a model or statistics to estimate how much value a treatment may have had.

Common methods use synthetic experiments, which attempt to replicate a real experiment by “finding” a control group among people who were not exposed to the ad or ad variable you are trying to evaluate.

For example, you could evaluate the effect of a technical issue that only impacted some users by finding a “similar” group of people who were unaffected. This method does not require up-front work, but it may be less accurate and subject to bias from unknown factors. It also requires advanced methods and support from data scientists later in the process.

Questions to ask at every step to ensure quality experiments

Once you have decided to run an experiment—and determined what business question you want to answer—we recommend that you start by asking questions about what and how you’re measuring.

To guide your experiment, establish each of the following:

  • A hypothesis that isolates a single variable: Have I isolated the question I want to answer?

  • Precision: Will my test provide enough data to accurately answer the question I’m asking?

  • Stability of treatment assignment: When someone is assigned to a treatment group, can I be sure they’ll stay in that group for the entire test?

  • Realistic exposure: Does my experiment act like it would in the real world?

  • Comparability between treatment and control groups: Do the test groups and control groups have the same characteristics and propensity to take action?

Although these questions apply specifically to experiments, it’s a good idea to apply similar thinking when evaluating observational methods.

A hypothesis that isolates a single variable

To truly understand the effect of a treatment—such as the difference in performance between two different campaigns—it’s important to determine up front what you want to test, and then to isolate that variable by ensuring it is the only difference between your test and control groups. This will allow you to confidently conclude that it was indeed the variable that caused the effect on ad performance.


If you are trying to test the effect of delivery frequency for direct mail, only change the frequency of delivery, while leaving creative, product advertised and messaging the same.

If you are trying to test the effect of budget on ad campaign performance, only change the budget allotted to each treatment group, while leaving targeting, bid value, creative, timing and execution the same.


Precision

Experiments, as with any statistical measure, come with some level of variance. And while the variability in experiments does reflect the variability present in the real world, it can affect your ability to learn the true effect of the treatment you’re testing—as well as your stakeholders’ perception of your experiments’ reliability.

When running experiments, make sure that your test is set up with enough precision to measure what you’re trying to test.

It will take more data to reveal smaller differences, and less data to reveal bigger ones.

Be sure your test is planned and executed in such a way that you can confidently answer the question you’re trying to address.
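The trade-off between effect size and data requirements can be made concrete with the standard two-sample proportion formula. This sketch is not from the guide; the z-scores below assume a conventional 95% confidence level and 80% power, and the conversion rates are illustrative.

```python
import math

def sample_size_per_group(p_control: float, p_test: float,
                          z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate people needed in each group to detect the given difference
    in conversion rates (z_alpha: 95% confidence; z_beta: 80% power)."""
    variance = p_control * (1 - p_control) + p_test * (1 - p_test)
    effect = (p_test - p_control) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# Detecting a 2% -> 2.5% lift takes roughly an order of magnitude more
# data than detecting a 2% -> 4% lift
print(sample_size_per_group(0.02, 0.025))
print(sample_size_per_group(0.02, 0.04))
```

Running a quick calculation like this before launch tells you whether your planned audience size can actually answer the question you’re asking, or whether the test needs to run longer or reach more people.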

Stability of treatment assignment

A key component of any experiment is making sure that each test audience stays in the treatment group to which they are assigned throughout the length of the test and across whichever devices and platforms you’re measuring.

This way, the people who aren’t supposed to see a treatment don’t see it, and the people who are supposed to see a treatment do see it, and at your intended cadence.

In some cases, the duration of the test may exceed the limits of a testing platform.

For example, the longer your experiment, the more difficult it will be to ensure that your control group is not exposed to a treatment. If so, consider revising the campaign’s duration to maintain the stability of your treatment assignment.
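One common way to keep assignment stable—assuming each person has a persistent identifier that follows them across devices and platforms—is to derive the group deterministically from a hash of that ID, rather than drawing it at random each session. This is a sketch of that general technique, not a description of any particular platform’s implementation.

```python
import hashlib

def assign_group(user_id: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to 'treatment' or 'control'.

    Hashing the persistent ID means the same person always lands in the
    same group, for the entire test and on every device."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # uniform value in [0, 1)
    return "treatment" if bucket < treatment_share else "control"

# The same ID always yields the same group, no matter when it's checked
assert assign_group("user-42") == assign_group("user-42")
```

Because the assignment is a pure function of the ID, no shared state needs to be synchronized between platforms to keep a person in their group.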

Realistic exposure

Your testing environment should mimic the real world as closely as possible.

From a platform perspective, ensure that the platform is treating your test campaigns like your normal ad campaigns.

From an execution perspective, deliver media that is representative of what you’d typically use.

You also want to make sure that the people in your test are exposed to outside media and campaigns with the same frequency as they would be under normal circumstances.

Some test designs might make this difficult due to technical limitations, which could cause interaction effects between your ads. For example, failing to withhold a campaign from the control group during one round of testing could cause your other campaigns to over-deliver to that group.

While this is difficult to prevent on digital platforms, you may be able to validate this during or after the test.

You may have non-test treatments that are a result of actions taken by people during a test, like retargeting campaigns aimed at users who visit your site. Exposure to these treatments doesn’t need to be equal across groups, as long as the events that trigger them—and the treatment people receive afterward—aren’t affected by the test treatment.

Comparability between treatment groups and control groups

The best experiments make sure that the groups of people being compared are statistically similar. When evaluating this, look at a few dimensions:

  • Characteristics: Are the people you’re comparing similar across dimensions like demographics, product engagement and utilization?

  • Outcomes: Do the groups you’re comparing have a similar propensity to use or to purchase products, as measured by pre-treatment conversion rates?

  • Outside and prior exposure: Have these groups historically been exposed to treatments or ads at the same rate? Is the media to which they’re exposed the same across channels outside of the platform you’re testing?
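One simple way to quantify comparability on a numeric characteristic—such as age, pre-treatment conversion rate, or prior ad exposure—is the standardized mean difference (SMD), where values below roughly 0.1 are conventionally treated as well balanced. This sketch and its sample data are illustrative, not part of the guide.

```python
import math
import statistics

def standardized_mean_difference(treatment, control):
    """Absolute difference in group means, scaled by the pooled standard
    deviation; values near 0 indicate the groups are well balanced."""
    mean_t, mean_c = statistics.fmean(treatment), statistics.fmean(control)
    pooled_sd = math.sqrt((statistics.pvariance(treatment) +
                           statistics.pvariance(control)) / 2)
    return abs(mean_t - mean_c) / pooled_sd

# Illustrative ages for two small groups; a real check would cover every
# characteristic you care about, one SMD per dimension
treatment_ages = [25, 31, 40, 28, 35, 29]
control_ages = [26, 33, 38, 27, 36, 30]
print(standardized_mean_difference(treatment_ages, control_ages))
```

Running this check on each dimension before the test starts gives you evidence that any difference in outcomes you later observe was caused by the treatment, not by a pre-existing imbalance.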

Questions to ask to ensure quality observational methods

Sometimes high-quality experiments are either unavailable or too onerous to run, so you may need to use observational methods.

When using observational methods to develop a test for incrementality, your first question should be: “Can I validate this against a true experiment?”

Observational techniques are meant to mirror the results of an experiment, so the simplest and most effective way to evaluate the quality of observational techniques is to compare them to experiments you’ve run in the past.

Broadly, there are two categories by which you can evaluate an observational model against an experiment:

Accuracy: How close am I to the true incrementality of the treatment represented by an experiment?

Decision-making: How often would my observational model choose the actual winner, like a true experiment would?
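Both checks can be computed once you have paired results: the lift each past experiment measured, and the lift your observational model estimated for the same campaign. This is a simplified sketch with made-up numbers; here “choosing the winner” is reduced to agreeing on whether the lift was positive.

```python
def validate_model(experiment_lifts, model_lifts):
    """Compare an observational model's lift estimates to experiment results."""
    # Accuracy: average absolute error of the model vs. the experiments
    errors = [abs(m - e) for m, e in zip(model_lifts, experiment_lifts)]
    mean_abs_error = sum(errors) / len(errors)
    # Decision-making: how often the model agrees with the experiment on
    # whether the treatment produced a positive lift
    agreement = sum((m > 0) == (e > 0)
                    for m, e in zip(model_lifts, experiment_lifts)) / len(errors)
    return mean_abs_error, agreement

# Illustrative lifts from three past experiments vs. the model's estimates
mae, agree = validate_model([0.010, 0.004, -0.002], [0.012, 0.006, 0.001])
print(round(mae, 4), round(agree, 2))  # 0.0023 0.67
```

A model with low error and high decision agreement across several past experiments can be trusted further than one that has never been validated.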

If it’s not possible to validate your observational method against an experiment, and you’re using a synthetic control, focus on making sure that a balance exists between your treatment group and synthetic control group. Even if you’re making optimization decisions using observational metrics, you should see your KPIs improve over time based on your decisions.

Source: Meta - Facebook


