Author: George Liacos
For the better part of a decade, the social sector has been increasingly obsessed with demonstrating impact. Donors demand it, boards request it, governments fund based on it, and consultants (myself included) talk about it relentlessly. But here’s the inconvenient truth: while the movement towards impact measurement is noble, the practice is riddled with traps that can send even the best-intentioned organisations down expensive and unhelpful rabbit holes.
When done well, impact measurement helps boards make better decisions, attracts funding, and creates clarity of purpose. When done poorly, it drains resources, demoralises staff, and tells you very little about what really matters. So, how do you harness the power of impact measurement without falling into its many traps? Let’s unpack the key concepts, the real-world challenges, and some practical ways forward.
First, the Framework: Theory of Change, Outcomes, and Metrics
The starting point for any impact measurement is a Theory of Change. This is not just a logic model or a list of activities. It’s a strategic hypothesis about how change happens in your context, for your cohort, based on your interventions. It should be ambitious, testable, and grounded in evidence.
From there, we move to an Impact and Outcome Framework. This is where we define what success looks like at various levels: outputs (what we do), outcomes (what changes), and impact (what’s ultimately different in the world). Then we layer on the measures and metrics: the quantitative and qualitative data points that help us know if we’re making progress.
But let me be clear: measures are not the strategy. They are tools. They only have value if they help us understand, learn, adapt, or prove.
Pitfall 1: Mistaking Activity for Impact
A common trap is equating outputs with outcomes. Just because we delivered 1,000 counselling sessions doesn’t mean mental health improved. One organisation we worked with in regional NSW had glowing activity reports—every box ticked. But when we conducted lived experience interviews with clients, we discovered high levels of dissatisfaction. The support felt generic and time-constrained. The outcome—improved wellbeing—simply wasn’t there.
Tip: Don’t just count what you can measure easily. Ask: what do we really want to change? How would we know if that’s happening?
Pitfall 2: Proxy Metrics Masquerading as Data
Many organisations collect data points that are actually proxies—things that stand in for what we wish we could measure. For instance, attendance at a youth program might be used as a proxy for social connection. But attendance doesn’t tell us if young people felt a sense of belonging. Worse, these proxies can become performance indicators, distorting behaviour. Staff start pushing attendance instead of building meaningful engagement.
Tip: Where you can, measure the outcome directly rather than through a proxy. If a proxy is unavoidable, be explicit about its limitations and triangulate it with other methods.
The Balanced Method: Three Lenses on Impact Measurement
We recommend a blended approach to measurement: one that combines quantitative evidence, qualitative insight, and executive or technical opinion.
- Quantitative data is valuable when you have clean, consistent information tied to actual change (e.g. pre-post wellbeing scales, recidivism rates, or educational attainment).
- Qualitative data—especially from lived experience—is essential to understand nuance, context, and why something worked or didn’t.
- Executive or technical opinion draws on practitioner expertise to interpret what’s really happening. This is particularly important in complex, adaptive systems where causality is messy and non-linear.
A good example is a homelessness project we reviewed in Victoria. Quantitatively, it had stable housing outcomes for 68% of clients at 12 months. Qualitatively, clients reported improved dignity and agency. Caseworkers, however, flagged that many were cycling through short-term accommodation, suggesting deeper structural issues. Without all three lenses, the picture would have been incomplete.
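For readers who maintain their own reporting tools, the three-lens logic above can be sketched in a few lines of code. This is only an illustrative sketch: the function name, thresholds, and figures are assumptions for demonstration, not real client data or a prescribed methodology. The point it captures is that divergence between lenses should trigger inquiry, not a verdict.

```python
# Hypothetical sketch: triangulating three measurement lenses for one program.
# All names, thresholds, and inputs below are illustrative assumptions.

def triangulate(quant_rate, qual_positive_share, practitioner_concern):
    """Combine three lenses into a simple prompt for further inquiry.

    quant_rate: share of clients meeting the measured outcome
                (e.g. stable housing at 12 months)
    qual_positive_share: share of lived experience interviews reporting
                         the outcome in the clients' own terms
    practitioner_concern: True if caseworkers flag a structural issue
    """
    signals = []
    signals.append("quant: strong" if quant_rate >= 0.6 else "quant: weak")
    signals.append("qual: strong" if qual_positive_share >= 0.6 else "qual: weak")
    if practitioner_concern:
        signals.append("practitioner: concern flagged")

    # Any divergence between lenses is a prompt to dig deeper,
    # not a final judgement about the program.
    diverges = practitioner_concern or (
        ("quant: strong" in signals) != ("qual: strong" in signals)
    )
    return {"signals": signals, "investigate": diverges}

# Echoing the Victorian example: strong numbers and positive stories,
# but practitioner insight still flags something worth investigating.
result = triangulate(0.68, 0.75, practitioner_concern=True)
```

Even with a headline figure like 68% stable housing, the practitioner flag alone is enough to warrant a closer look—which is exactly what the blended approach is designed to surface.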
Pitfall 3: Over-Engineering the System
Another trap is building an overly complex measurement system that no one uses. We were once brought in by a large youth service that had 47 KPIs across five funding programs. Staff were drowning in data entry. The metrics weren’t aligned, and worse, they weren’t being used to learn or improve.
Tip: Design your measurement system backwards. Start with the decisions you want to inform, then identify the minimum viable data set to support those decisions.
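Designing backwards can be made concrete with a simple exercise: list the decisions, list the data each decision needs, and take the union. The decision names and metric names below are hypothetical examples, not a recommended indicator set—the structure of the exercise is the point.

```python
# Hypothetical sketch of "designing backwards": start from the decisions
# you want to inform, then derive the minimum viable data set.
# Decision and metric names are illustrative assumptions only.

decisions = {
    "renew the outreach contract?": ["stable_housing_12m", "client_satisfaction"],
    "adjust caseload ratios?": ["caseload_per_worker", "client_satisfaction"],
    "expand to a second site?": ["stable_housing_12m", "cost_per_client"],
}

# The minimum viable data set is simply the union of the metrics
# that at least one real decision depends on.
minimum_data_set = sorted({m for metrics in decisions.values() for m in metrics})
```

Anything collected beyond that union has to justify its collection cost—which is how a 47-KPI system shrinks to something staff will actually use.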
Pitfall 4: The “Prove, Not Improve” Mindset
Too often, measurement is driven by accountability rather than learning. Funders want to know if their money made a difference (fair enough), but this can create a culture where organisations are incentivised to tell a good story, not an honest one.
One mental health service we worked with in Queensland flipped this on its head. They introduced quarterly learning reviews using impact data to explore what was working, what wasn’t, and what needed to change. These sessions included frontline staff, executives, and clients. The result? Better programs, stronger culture, and eventually, more funding.
Tip: Shift your measurement culture from “proving” to “improving.” Use data to ask better questions, not just give better answers.
Pitfall 5: Chasing Certainty in a Complex World
In fields like mental health, disability, or social inclusion, linear logic often fails us. Change is multi-causal and deeply contextual. We can’t isolate variables in a clean lab experiment. Yet some boards or funders still expect hard proof of attribution.
Instead of pretending we can offer scientific certainty, we need to embrace contribution. Did our work contribute to change, alongside other efforts? Are we part of a story that’s moving in the right direction?
Tip: Use contribution language. Be honest about the role you played and what else may have influenced the outcome.
The Gains: When Impact Measurement Works
Despite the pitfalls, when done right, impact measurement is powerful. It:
- Clarifies purpose and aligns teams
- Improves program design and delivery
- Attracts the right funders
- Builds credibility and influence
Final Thoughts: Start with Purpose, Not Data
At the end of the day, impact measurement is not about numbers. It’s about learning. It’s about getting better at doing good. And it only works if it’s connected to your strategy, resourced appropriately, and embedded in your culture.
So ask yourself:
- What decisions do we want to make better?
- What story do we want to tell?
- Who needs to be part of that story?
Then build the simplest system that helps you get there. Not the most beautiful dashboard. Not the longest list of metrics. The system that helps you do better work.
That’s the real impact.