I. Evaluation

Young people and youth volunteers often do not have enough training or information to carry out fund-raising campaigns. How can we help them?

Do you want to be able to monitor how an activity you’ve developed – for example a fund-raising campaign – is doing: how successful it is, what benefits it has delivered, and how you can learn from it for the future?

This is how to do it.

This Section of the Toolkit takes you through the steps needed to design and implement evaluations. These evaluations can be done at different levels and scales. For example, your organisation may have developed an Action Plan for the next five years, and you may want to design an evaluation that monitors progress on the Action Plan and assesses achievements at the end of the five-year period. Your organisation may have secured funding from a donor to deliver a programme for disadvantaged young people, and this donor requires evidence that its money is being well spent. Or you may wish to focus specifically on how a particular fund-raising campaign is doing. In all of these cases you will need evaluation.

This Section shows you how to do it effectively. It shows:

  • Why you need evaluation
  • When you need to do it
  • The guiding principles you should follow
  • The actions you need to take
  • Some useful evaluation tools to use
  • Evaluation pitfalls and how to avoid them
  • Key resources to help you.

Many organisations fear evaluation. They think it’s all about measuring success and punishing them for failing to meet their targets. In this context, evaluation is often seen in a negative light. But when it is used to help organisations learn how to do things better, evaluation is a very powerful tool to support change and innovation. Embedding evaluation intelligence in an organisation will support ‘double-loop learning’ for staff, clients, stakeholders and the organisation as a whole.

To help organisations learn, evaluation needs to be used not just as a retrospective tool to assess performance, for example at the end of a funding year. Rather, it needs to be embedded within organisational systems and processes to support a cycle of continuous improvement. Essentially, the role of evaluation in organisations is not to drive perfection but to understand what is relevant, what can be controlled and what can’t, what is good enough and above all what can be applied from learned experience to help the organisation change for the better.

 

There are four reasons – or purposes – for doing evaluation:

Accountability

Evaluation helps organisations collect and present evidence to stakeholders about the effectiveness of their activities. This is particularly important in fund-raising, because donors who provide financial support are unlikely to keep providing it if they don’t see value for their money.

Knowledge

Evaluation collects and analyses information that can then be used to provide evidence of success. It helps organisations understand who has benefited from their activities, in what ways and under which circumstances.

Operational Efficiency & Effectiveness

Evaluation helps organisations keep track of how they are progressing in relation to their current strategies and plans. Through monitoring progress, organisations can identify problems and issues that need to be solved, and understand the actions needed to correct them.

Learning And Sustainability

Evaluation is crucial to enable the organisation to become a ‘learning organisation’. It supports continuous review and reflection and helps organisations to adapt to changing circumstances. By providing evidence of what works, evaluation supports organisational sustainability.

Guiding Principles in Evaluation

Evaluation should be put in place at the start of a fund-raising programme or other project, not just used as a retrospective tool to assess performance at the end. It should be embedded within the organisation itself, to support a cycle of continuous learning and improvement.

  • This means that evaluation should be used for four main purposes: a developmental purpose – to support the development of the organisation or of a specific programme design and implementation plan (ex-ante evaluation); an operational purpose – to help the organisation or programme keep track of how it is progressing (on-going or ‘formative’ evaluation); a summative purpose – to help the organisation or programme measure what it has achieved (ex-post evaluation); and a sustainability purpose – to help key actors in the organisation or programme learn from their experience
  • There are many different methods and tools for collecting and analysing evaluation data. Each has different purposes and different resource and skills requirements. The evaluation design and plan should take into account ‘pragmatic’ considerations: the ‘object’ of the evaluation; the purposes of the evaluation; the resources available to carry it out; who the evaluation audience is and what their expectations are; what evaluation skills are available in the programme, or can be brought in from outside; and how long the timeframe for the evaluation is and what it is likely to cost
  • The evaluation should not just reflect the ‘expert’ view but should take a ‘participatory’ approach – trying to ensure that the voices and perspectives of different stakeholders are represented, particularly those who have less power and whose voices are not often heard
  • This means that as far as possible evaluation data should be drawn from different sources and from different perspectives, and compared against each other, through ‘triangulation’, so that the evaluation reflects a balanced viewpoint.

 

Checklist of actions:

  • Identify the evaluation purposes, timeframe and modes of operation
  • Decide on who the audiences are and what are their expectations
  • List the evaluation questions the evaluation will answer
  • Decide on the indicators to measure results
  • Decide on the methods to collect and analyse the data
  • Work out what resources you need to do the evaluation
  • Produce a plan to carry out the evaluation and assign tasks and roles
  • Produce a plan to disseminate the results

Key resources:

  • Chen, H.-T. (1990). Theory-driven evaluations. Newbury Park, CA: Sage Publications Inc.
  • Pawson, R., & Tilley, N. (1997). Realistic evaluation. Thousand Oaks, CA: Sage Publications Inc.
  • Stame, N. (2004). Theory-based evaluation and varieties of complexity. Evaluation, 10(1), 58–76.
  • Weiss, C. H. (2000). Which links in which theories shall we evaluate? New Directions for Evaluation, 87, 35–45.