Incrementality is harder than it sounds
What people really mean when they ask about incrementality
In theory, incrementality is a clean question.
What was the causal impact of our marketing activity, compared to what would have happened anyway?
In practice, that’s rarely how the question is asked.
Most leaders don’t use the word “incrementality” at all. They ask about forecasts. They ask what results to expect if budgets change. They ask whether a strategy is “working”, and why results look the way they do.
Incrementality tends to appear later, usually when approval is being sought or challenged. When a strategy needs defending. When a budget needs justifying. When someone wants confidence that an investment isn’t just riding organic demand.
That’s not curiosity. It’s anxiety.
Underneath the word is a much simpler concern: how do we know what actually happened because of us? What portion of this outcome would have occurred anyway, without the spend, without the campaign, without the channel?
Because without an answer to that question, it’s hard to feel confident scaling, cutting, or changing anything. You’re left reacting to surface-level results, hoping they mean what you think they mean.
This is why incrementality gets talked about as if it’s a technical problem, when it’s really a decision problem.
Incrementality as a promise
One of the reasons incrementality is so often misunderstood is that people try to turn it into a metric.
It isn’t.
Incrementality is closer to a promise. Or, more accurately, the measurement of a promise.
“If we do X, we expect Y.”
“I did X, and I got Y.”
“And Y would not have happened if we hadn’t done X.”
Knowing that last part is the hard bit.
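To make the promise concrete, here's what it looks like as arithmetic. This is a minimal sketch in Python; every name and number is illustrative, and the counterfactual is simply assumed to exist.

```python
# The "promise" as arithmetic. The inputs are invented, and the
# counterfactual is assumed to come from somewhere credible (a holdout,
# a model). Producing that number is the hard part.

def incremental_impact(observed: float, counterfactual: float) -> float:
    """What happened, minus what would have happened anyway."""
    return observed - counterfactual

observed_revenue = 120_000       # revenue while the activity ran
counterfactual_revenue = 95_000  # estimated revenue without it
spend = 10_000

lift = incremental_impact(observed_revenue, counterfactual_revenue)
iroas = lift / spend  # incremental return on ad spend

print(f"Incremental revenue: {lift:,.0f}")  # 25,000
print(f"iROAS: {iroas:.1f}")                # 2.5
```

The arithmetic is trivial. Everything interesting, and everything the rest of this piece is about, hides inside counterfactual_revenue.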
It’s also why incrementality isn’t an attribution model, or a reporting view, or a channel-level metric. Attribution can help explain how outcomes are distributed. Incrementality is trying to answer whether the outcome exists at all because of the activity.
This is where a lot of teams get tripped up. Channel performance in isolation is often treated as incremental by default. If a channel produces conversions, revenue, or leads, it’s assumed to be contributing causally.
But without understanding substitution, overlap, and organic behaviour, that assumption is fragile. You don’t know whether the channel created demand, captured existing demand, or simply re-labelled it.
Incrementality forces you to confront that distinction. And once you do, some very comfortable narratives start to wobble.
Why incrementality is underestimated
Incrementality sounds like a measurement challenge. In reality, it’s an organisational one.
People underestimate it because they focus on the mechanics (how to test, how to model, how to analyse) without appreciating the scope of what it unlocks.
Done properly, incrementality gives you a language that transcends marketing.
If you can credibly demonstrate that investing X produces Y that wouldn’t have happened otherwise, you’re no longer arguing for budget on the basis of activity or efficiency. You’re talking in terms that finance understands. Operations understands. Leadership understands.
That’s powerful.
It’s also why incrementality is often resisted, diluted, or reduced to a buzzword. Proving causality doesn’t just validate good strategies. It invalidates bad ones. It challenges assumptions that teams, leaders, and even entire functions may have been built around.
For in-house teams, pushing for true incrementality can require political capital, trust, and patience. If things are “working”, there’s often little incentive to rock the boat. If things aren’t working, it’s even harder to get the support required to run the right experiments.
For external teams, the incentives can be similar. If the client is happy, why introduce uncertainty?
This is why incrementality is talked about far more than it’s practised. Not because it isn’t valuable, but because it asks organisations to confront uncertainty honestly, and to act on the answer when it isn’t flattering.
What incrementality depends on
Incrementality doesn’t sit on top of your measurement setup. It sits inside it.
Before you can meaningfully talk about causal impact, a few things need to be true. Not theoretically, operationally.
First, there has to be clear agreement on what success actually means. Not just a KPI, but a shared understanding of what is being improved and why. If different stakeholders are optimising for different outcomes, any conversation about incrementality will collapse into debate before it gets anywhere useful.
Second, there needs to be some understanding of what happens without marketing intervention. Incrementality only makes sense relative to a baseline. If you don’t have a view of organic behaviour (what demand looks like in the absence of activity), you have nothing to compare against. Everything becomes uplift by assumption, a failure mode sketched below.
Third, there has to be intent. Strategies and actions need to be clearly defined and repeatable. Incrementality can’t be measured against vague activity or one-off changes. If you can’t articulate what you did, why you did it, and how you would do it again, you can’t credibly claim to understand its impact.
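The baseline point is worth making concrete. Here's a minimal sketch of "uplift by assumption" next to a baseline-aware read. It assumes a comparable unexposed market exists, which is itself a luxury, and every figure is invented.

```python
# "Uplift by assumption" versus a baseline-aware read. Assumes a
# comparable market where the campaign didn't run; figures are illustrative.

campaign_pre, campaign_post = 1_000, 1_300  # weekly conversions, test market
control_pre, control_post = 800, 1_000      # weekly conversions, control market

# Naive pre/post read: credits marketing with the entire change.
naive_uplift = (campaign_post - campaign_pre) / campaign_pre  # +30%

# Baseline-aware read: organic demand grew in the control market too.
organic_growth = control_post / control_pre                 # 1.25x
expected_without_marketing = campaign_pre * organic_growth  # 1,250
incremental = campaign_post - expected_without_marketing    # 50

print(f"Naive uplift: {naive_uplift:.0%}")             # 30%
print(f"Incremental conversions: {incremental:.0f}")   # ~50, not 300
```

Same campaign, same numbers. The only difference is whether organic behaviour was measured or assumed away.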
These foundations are easy to gloss over because they don’t feel like “measurement work”. They feel like alignment work. But without them, attempts to measure incrementality tend to do more harm than good.
Teams start asking questions they aren’t equipped to answer. Results become ambiguous. Confidence erodes. Trust weakens, not because the analysis was wrong, but because the system wasn’t ready to support the conclusion.
This is where lack of trust quietly distorts the entire conversation. Incrementality requires time, investment, and expertise. Without trust, teams don’t get the space or support to do the work properly. Experiments get watered down. Results get second-guessed. Uncertainty becomes something to hide rather than something to learn from.
Ironically, the organisations most eager to “prove incrementality” are often the least prepared to hear the answer.
Incrementality isn’t something you bolt on to an existing setup. It’s something that emerges when measurement is intentional, systems are designed to learn, and the organisation is willing to sit with uncomfortable truths long enough to act on them.
Experiments versus narratives
In theory, incrementality is best answered through experiments.
In practice, most organisations prefer narratives.
Narratives feel safer. They imply control. They allow leaders to believe that outcomes are the result of deliberate choices rather than uncertainty. They allow marketers to appear confident, decisive, and in command of the situation.
Experiments do the opposite. They admit uncertainty up front. They accept that outcomes might be worse before they’re better. They create the possibility of answers that are inconvenient, politically awkward, or hard to act on.
This is why experimentation feels uncomfortable long before it becomes technical.
There’s an unspoken expectation in many organisations that marketers should “know what to do”. Admitting the need to experiment can feel like admitting a lack of expertise, especially when there isn’t a strong measurement system or trust already in place.
The irony, of course, is that if anyone truly knew exactly what to do from day one, there would be little need for measurement at all.
Without trust and agreement, experiments become risky in ways that have nothing to do with data. A holdout test on a major channel might temporarily reduce volume. A change designed to learn might look like a step backwards in a weekly report. Even when the long-term value is clear, the short-term optics can be hard to defend.
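For concreteness, here's roughly what reading a randomised holdout looks like, assuming users really were randomly withheld. The counts are invented, and deliberately land in the awkward middle ground just described.

```python
# Reading a randomised holdout with a pooled two-proportion z-test.
# Assumes genuinely random assignment; counts are illustrative.
from math import sqrt, erfc

exposed_n, exposed_conv = 100_000, 2_300  # saw the channel
holdout_n, holdout_conv = 20_000, 420     # withheld from the channel

p_exp = exposed_conv / exposed_n    # 2.30%
p_hold = holdout_conv / holdout_n   # 2.10%

# Is the difference distinguishable from noise?
p_pool = (exposed_conv + holdout_conv) / (exposed_n + holdout_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / holdout_n))
z = (p_exp - p_hold) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value

relative_lift = (p_exp - p_hold) / p_hold
print(f"Relative lift: {relative_lift:.1%}, p-value: {p_value:.3f}")
# Relative lift: 9.5%, p-value: 0.083
```

A 9.5% lift with a p-value of 0.083 looks like a promising signal in a longer test and a step backwards in a weekly report. Which interpretation wins usually has more to do with trust than statistics.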
So teams compromise.
They run tests that are small enough not to threaten results. They frame changes as optimisations rather than experiments. They avoid scenarios where the answer could meaningfully challenge existing strategies.
This is where pseudo-experiments creep in.
Activity is labelled as testing, but the outcome is never allowed to be truly negative. Control groups are diluted. Timeframes are shortened. Conclusions are drawn selectively. The organisation gets the appearance of rigour without having to confront its implications.
None of this happens because people don’t care about learning. It happens because the system hasn’t made it safe to learn.
True experiments require something many organisations struggle to provide: patience, trust, and a willingness to temporarily trade certainty for understanding. Without those conditions, incrementality becomes something that’s discussed in theory and avoided in practice.
And when narratives consistently win over experiments, organisations don’t just fail to measure incrementality. They lose the ability to discover it at all.
When the answer is uncomfortable
The real test of incrementality isn’t whether you can measure it.
It’s what you do when the answer isn’t what you hoped for.
Negative or weak incrementality is an outcome many organisations say they want to understand, but very few are prepared to act on. When analysis suggests that a channel, tactic, or long-running investment isn’t actually driving incremental outcomes, things get complicated quickly.
This is especially true for channels that have been in place for a long time. Over time, they develop institutional credibility. Other teams come to rely on them. Sales assumes they work. Operations plans around them. Leaders see them as part of the business-as-usual engine.
When incrementality challenges that assumption, the data isn’t just questioning performance. It’s questioning a shared belief.
Cutting back or removing a non-incremental channel is rarely a clean decision. Even when the analysis is sound, the organisational impact can be messy. If results dip afterwards for unrelated reasons, the change is often blamed. Questions start surfacing about whether it was the “right call”, whether the decision was rushed, or whether the channel should be reinstated “just to be safe”.
This is where trust becomes decisive.
Without trust in the people doing the work, uncomfortable answers are treated as risks to be managed rather than signals to be acted on. Data is scrutinised not to understand it, but to discredit it. The conversation shifts from “what is this telling us?” to “why shouldn’t we believe it?”.
Even well-intentioned leaders can fall into this trap. The pressure to maintain stability, protect results, or reassure stakeholders can outweigh the longer-term benefits of acting on incremental insight.
This is why incrementality can’t be introduced as a purely analytical exercise. It has to be communicated carefully, framed clearly, and supported explicitly. Leaders need to understand not just the conclusion, but the intent behind the work, what was tested, why it was tested, and what will change as a result.
When that context is missing, even correct answers can lose credibility.
Handled well, negative incrementality becomes one of the most valuable outcomes a team can produce. It creates focus. It prevents wasted effort. It opens space for better strategies to emerge.
Handled poorly, it becomes a political liability, something teams quietly avoid, soften, or explain away.
The difference isn’t the data. It’s whether the organisation has built the trust and shared understanding required to accept what the data is saying, even when it’s inconvenient.
Who incrementality is really for
On the surface, incrementality is framed as something organisations pursue to scale. To spend less wastefully, and get more from the same investment.
That’s true, but incomplete.
It’s not just the organisation as a whole that benefits when incrementality is genuinely understood. There’s an upside for operators as well.
When causal impact is clear, teams can make better decisions, scale with confidence, and stop arguing over symptoms rather than causes. Budget conversations become more grounded. Strategy discussions become more constructive. Energy shifts from defending activity to improving outcomes.
But incrementality also redistributes power.
When you can credibly demonstrate what does and doesn’t drive outcomes, it challenges narratives that may have gone unquestioned for years. It weakens the influence of opinions that aren’t backed by evidence. It reduces the ability to hide behind vanity metrics or inherited assumptions.
That can feel threatening.
Not because leaders or teams are acting in bad faith, but because incrementality introduces a level of accountability that isn’t always comfortable. Plans become easier to interrogate. Decisions become harder to justify without substance. Long-standing initiatives are exposed to scrutiny they may never have faced before.
This tension is rarely acknowledged openly. Incrementality is usually framed as “more results” or “better efficiency”, outcomes everyone can agree on. The deeper implications are often discovered only once the system starts working.
Experienced operators recognise this dynamic, even if they don’t articulate it explicitly. They’ve been in rooms where uncomfortable questions surface late. They’ve felt the pressure to smooth results, frame narratives, or move on quickly rather than challenge assumptions. They know that understanding incrementality changes how those conversations play out.
For teams earlier in their journey, this can feel aspirational. For teams who’ve lived through it, it feels necessary.
Incrementality isn’t just about knowing what works. It’s about changing who gets to decide what “working” means, and on what basis.
The real value of incrementality
Most organisations say they want to understand incrementality because they want more efficient results.
They want to know what isn’t working so they can cut it without hurting performance. They want confidence that spend is justified. They want reassurance that growth is coming from the right places.
Those are reasonable goals. But they’re not the full value of incrementality.
The real value isn’t efficiency. It’s scalability with confidence.
When incrementality is understood, teams aren’t just optimising what exists. They’re building the ability to make decisions that hold up under scrutiny. They can explain why something works, not just that it appears to. They can scale strategies without relying on hope, habit, or inherited assumptions.
More importantly, they can do this without resorting to narrative gymnastics when results fluctuate. Weak periods become data points, not failures. Strong periods become signals, not coincidences. The organisation spends less time reacting and more time learning.
This is why incrementality ultimately depends on people as much as measurement. It requires leaders who are willing to hear uncomfortable answers, and teams who are trusted to pursue truth over optics. It requires patience, alignment, and a shared commitment to learning, even when the outcome challenges what everyone expected.
Most organisations don’t have a data collection problem. They don’t even have a modelling problem.
They have a confidence problem.
Incrementality, done properly, doesn’t just tell you what works. It gives you the confidence to act on it, to cut, to scale, to change course, knowing why you’re doing it.
And that’s what turns measurement from reporting into leverage.