18 September 2025

Why Only Measuring Success is Holding Us Back

In the creative and cultural sectors, we’re very good at sharing our success stories. Annual reports and funding applications tend to spotlight what went right, but we rarely talk about the projects that struggled, or the ideas that never quite worked. Those stories matter just as much, and without them, we risk building a picture of progress that’s only half true.

This tendency is known as survivor bias. It happens when we only look at the examples that made it through, and overlook the many that didn’t. In evaluation, survivor bias means focusing on successes while quietly leaving failures out of the picture. The result is a skewed understanding of what’s really happening, and missed opportunities to learn from our mistakes.

Why survivor bias matters in evaluation

Evaluation is meant to help us understand what works, what doesn’t, and why. Done well, it strengthens decision-making, supports funding bids, and drives continuous improvement. But if evaluations only record achievements, the learning potential is cut in half. We repeat mistakes, set unrealistic expectations, and create a narrative of success that might impress in the short term but fails to support genuine long-term progress.

This isn’t about being negative or critical for its own sake. Failures, partial successes, and unexpected outcomes are valuable. They highlight blind spots, reveal barriers, and help us adapt. By acknowledging them, we create a fuller picture that ultimately benefits organisations, funders, and communities alike.

When partial success gets framed as full success

Even the most carefully designed initiatives are not immune to survivor bias. Take the government’s Create Growth Programme (CGP). According to its 2024 evaluation, the initiative succeeded in boosting participants’ confidence and business skills. Many organisations reported feeling better prepared to innovate and develop their work.

However, the hoped-for increase in external investment and product growth was far less evident at this stage. On paper, the programme could easily be described as a success, and indeed, much of the reporting framed it that way. But if we stop there, we miss the critical learning about what hasn’t happened yet, or what barriers are still in place. The evaluation itself makes this point, noting the gap between early gains and more substantial, long-term outcomes.

This isn’t failure in a negative sense; the programme did achieve meaningful things. But it shows how survivor bias can creep in: if we only highlight the elements that worked, we risk drawing overly optimistic conclusions and overlooking the adjustments needed for the future.

What gets lost when failure is ignored

The consequences of survivor bias in evaluation go beyond missed insights. As recent research in the journal Research Evaluation points out, “failures in impact evaluation” can be just as useful for learning as successes. When reports exclude them, we lose opportunities to learn systematically.

There are several risks here:

  • Repeating mistakes: if shortcomings aren’t documented, future projects may unknowingly run into the same barriers.
  • Skewed expectations: funders and stakeholders may assume results are easier or faster to achieve than they really are, creating pressure for unrealistic outcomes.
  • Reduced innovation: when only “winners” are showcased, organisations become risk-averse, sticking to safe bets that are easier to justify rather than trying bold, experimental approaches.
  • Hidden inequities: failure often exposes structural barriers (for example, around access or inclusion) that success stories gloss over. By not naming these, progress stalls.

In short, leaving failure out of evaluation isn’t just incomplete reporting; it actively weakens our capacity to adapt and grow.

How we can do better

So what can we do to make evaluation more honest, useful, and resistant to survivor bias? Here are a few practical approaches:

  • Balance the story: include not just what worked, but also what didn’t. Even a short section on challenges and lessons learned makes a huge difference.
  • Use mixed methods: combine quantitative measures (numbers, attendance, growth) with qualitative insights (stories, feedback, experiences). Numbers show scale, but narratives explain impact.
  • Make space for reflection: build time into projects to discuss failures openly, without fear of blame. This demonstrates that learning is valued more than appearances.
  • Think long term: some impacts take years to emerge. Being transparent about what hasn’t yet materialised helps manage expectations and makes future evaluations more robust.

Moving beyond survivor bias

Success is worth celebrating, but it’s not the whole story. To build stronger, more sustainable projects, we need evaluations that capture the full picture: the experiments that worked as well as the ones that didn’t. That’s how we move beyond survivor bias and create lasting change.

If you’re interested in developing more balanced, practical evaluation approaches, our upcoming Measuring Impact & Evaluation course on 3 December explores exactly this and will help ensure that your evaluations are honest, useful, and future-focused.

Alternatively, you can explore our self-guided eLearning course on Embracing Failure in Practice at your own pace. Through critical reflection and probabilistic thinking, it offers tools to embrace failure, learn from it, and make informed decisions. By recognising the role of failure in personal and professional growth, you’ll develop a mindset that supports more holistic, sustainable approaches to projects.
