Measurement beyond dashboards
A common point of confusion I see in organisations is what “measurement” actually means.
When leaders talk about performance, they rarely ask for measurement explicitly. They ask for reporting. They ask for explanations. They ask why results look the way they do, or what changed from last period.
On the surface, those don’t sound like measurement problems. But they usually are.
What leaders are really asking for is clarity. They want answers they can stand behind. They want confidence that decisions are being made on something more solid than intuition or selective metrics. Measurement is often the means to that end, even if it isn’t named as such.
The problem is that measurement is still widely treated as a reporting exercise, rather than a system that actively shapes outcomes.
Measurement is not dashboards
Dashboards are where measurement shows up. They are not where it starts.
Good measurement isn’t defined by how clean a report looks or how many metrics it contains. It’s defined by whether the data being collected actually improves decision-making.
In performance environments, this distinction matters because most systems now operate algorithmically. Campaigns, bidding, targeting, optimisation: all of it runs on feedback loops. And those loops are only as good as the signals they receive.
The rule is simple. Poor input leads to poor output. Better input leads to better output.
Feeding a platform a basic signal (“one conversion happened at this time”) gives it very little context. Feeding it richer signals (conversion events enriched with first-party and third-party data such as click IDs, lead quality, customer attributes, sale value, or lifetime value) gives it far more clarity about what success actually looks like.
That context doesn’t just help the algorithm. It helps the people responsible for interpreting performance, forming hypotheses, and deciding what to do next.
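The contrast between a bare signal and an enriched one can be sketched as data. This is an illustrative sketch only; the field names are hypothetical and don’t correspond to any specific platform’s API:

```python
# Illustrative only: the same conversion expressed as a bare signal
# versus an enriched one. Field names are hypothetical, not any
# specific ad platform's schema.

basic_event = {
    "event": "conversion",
    "timestamp": "2024-05-01T10:32:00Z",
}

enriched_event = {
    "event": "conversion",
    "timestamp": "2024-05-01T10:32:00Z",
    "click_id": "abc123",           # ties the event back to the ad click
    "lead_quality": "qualified",    # first-party CRM assessment
    "sale_value": 1200.00,          # immediate revenue
    "predicted_ltv": 4800.00,       # modelled lifetime value
    "customer_segment": "returning",
}

def signal_richness(event):
    """Count the optimisation-relevant fields beyond the bare minimum."""
    baseline = {"event", "timestamp"}
    return len(set(event) - baseline)

print(signal_richness(basic_event))     # 0
print(signal_richness(enriched_event))  # 5
```

The bare event tells the platform only that something happened; the enriched event tells it what that something was worth, which is what the feedback loop actually optimises against.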
This is where many organisations fall short. They collect far more data than they use. Measurement becomes a passive artefact rather than an active input into decision-making.
Collecting data without using it is not neutral. It introduces risk without increasing value. Measurement only earns its place when it changes behaviour, decisions, or outcomes.
Measurement is collection and usage.
Why better inputs change results
Most senior leaders understand that algorithms exist. Fewer understand how sensitive those systems are to the quality and structure of the data they’re fed.
That’s not a failure on their part. As leaders move up, they spend less time in tools and more time managing people, priorities, and trade-offs. In fast-moving performance environments, even a short time off the tools can create meaningful blind spots.
This is why the responsibility can’t sit with leaders to know exactly what to ask for.
It sits with the Performance Lead.
Their job is to understand what should be measured, why it matters, how it is measured today, how it could be improved, and what that improvement will unlock. Just as importantly, they need to decide how much of that needs to be explained, and in what language.
Leaders don’t need mechanics. They need impact.
Optics, trust, and uncomfortable trade-offs
Measurement inevitably intersects with optics.
Leaders rarely frame it this way, but many care, at least in part, about how results are perceived upward. That isn’t inherently bad. Optics can be a tool to secure support for better systems, more resourcing, or necessary change.
Problems arise when optics become the goal rather than the means.
Vanity metrics creep in. Narratives become selective. Reporting reassures without improving incrementality.
As economist Ronald Coase put it, “If you torture the data long enough, it will confess to anything.”
A good Performance Lead knows how to frame results to look positive.
A great Performance Lead knows when to do that, and when not to.
That distinction is where trust either compounds or erodes. And it often creates tension, particularly when performance is under pressure and reassurance is easier than explanation.
The part of the job that’s easy to underestimate
Many Performance Leads underestimate how much of their role is translation.
Not presentation. Translation.
It’s not enough to make good decisions. You need to explain what those decisions change, what they cost, what they unlock, and how they affect the organisation as a whole.
Performance language does not map cleanly to leadership language.
A leader might hear “conversion rate is down” and assume performance has worsened, without the broader context that reach expanded significantly, total sales increased, and downstream channels benefited as a result.
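With hypothetical numbers, that scenario is straightforward arithmetic: a lower conversion rate on a much larger base still yields more total conversions:

```python
# Hypothetical figures: conversion rate falls while reach expands,
# yet total conversions rise.

last_period = {"reach": 100_000, "conv_rate": 0.040}  # 4.0% conversion rate
this_period = {"reach": 250_000, "conv_rate": 0.025}  # 2.5% conversion rate

def conversions(period):
    """Total conversions = reach multiplied by conversion rate."""
    return period["reach"] * period["conv_rate"]

print(conversions(last_period))  # 4000.0
print(conversions(this_period))  # 6250.0
```

The headline metric worsened, but the outcome improved. Without the translation step, the leader only hears the first half.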
If that translation doesn’t happen, trust weakens. Support stalls. Decisions slow down.
Being effective in this role now requires fluency in two domains, performance and leadership. The ability to move between them is not a soft skill. It’s a core competency.
When better measurement doesn’t immediately work
Improving measurement doesn’t guarantee immediate uplift.
Sometimes trust doesn’t follow because the change wasn’t communicated well. Sometimes the data exists but isn’t used effectively. Sometimes external factors (macro conditions, seasonality, competitive shifts) overwhelm even well-designed systems.
And sometimes the hypothesis was simply wrong.
That isn’t failure. Improved measurement increases the organisation’s ability to run better experiments. Negative results are still data. They refine understanding rather than undermine it.
The mistake is expecting measurement to remove uncertainty. Its real value is helping organisations make better decisions in spite of it.
Beyond dashboards
Better measurement isn’t about more metrics or better reports. It’s about building systems that create clearer signals, better decisions, and greater confidence.
For Performance Leads, that responsibility doesn’t end at implementation. It extends to usage, interpretation, and communication.
Dashboards are visible. The system behind them is where the real work happens.
And the impact of that system is felt long before it ever shows up in a report.