Evaluation is the systematic, objective assessment of the design, implementation and results of an ongoing or completed project, programme or policy. It differs from monitoring in that it involves a judgement of the value of the activity and its results. Evaluations should be conducted for most reintegration programmes, with the type, scope, timing and approach depending on their intended use.
The core functions of evaluations are to:
- Enable accountability and learning;
- Inform stakeholders;
- Provide empirical knowledge about what worked, what did not and why;
- Enable informed decision-making.
Evaluation criteria are standards by which an intervention can be assessed:
Criterion | Description |
---|---|
Relevance | The extent to which the objectives and goals of an intervention remain valid and pertinent, either as originally planned or as subsequently modified. |
Efficiency | How well human, physical and financial resources are used to undertake activities, and how well these resources are converted into outputs. |
Effectiveness | The extent to which a project or programme achieves its intended results. |
Impact | The positive or negative, and primary or secondary, long-term effects produced by an intervention, directly or indirectly, intentionally or unintentionally. |
Sustainability | The durability of project results, or the continuation of the project's benefits once external support ceases. |
Not every evaluation needs to focus on all these criteria. Depending on the scope of the evaluation, it might assess only some of them.
Evaluation mechanisms need to be integrated at the beginning of an intervention and be part of the initiative’s workplan and budget.
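Where evaluation records are kept digitally, these criteria can also be captured in a simple structured form. The sketch below is illustrative only; the `CriterionRating` structure, the rating scale and the example entries are hypothetical assumptions, not a format prescribed by this handbook.

```python
from dataclasses import dataclass

# The five criteria from the table above.
CRITERIA = ["Relevance", "Efficiency", "Effectiveness", "Impact", "Sustainability"]

@dataclass
class CriterionRating:
    """One rating against a single criterion (hypothetical structure)."""
    criterion: str
    rating: int    # for example 1 (weak) to 4 (strong); the scale is an assumption
    evidence: str  # short justification drawn from evaluation findings

def summarise(ratings):
    """Collect ratings by criterion so unassessed criteria are easy to spot."""
    scored = {r.criterion: r.rating for r in ratings if r.criterion in CRITERIA}
    not_assessed = [c for c in CRITERIA if c not in scored]
    return scored, not_assessed

# Example: an evaluation scoped to only some of the criteria, as noted above.
ratings = [
    CriterionRating("Relevance", 3, "Objectives remain valid after context changes"),
    CriterionRating("Sustainability", 2, "Benefits unlikely to continue without support"),
]
print(summarise(ratings))
```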
➔ Assessing the use of an evaluation
To understand how an evaluation should be set up, it is necessary to assess how its findings will ultimately be used. To do this, ask three questions (a simple way of recording the answers is sketched after this list):
1. What information is needed? Examples:
- Information on the relevance of intended outputs or outcomes and validity of the results framework and results map;
- Information about the status of an outcome and factors affecting it;
- Information about the effectiveness of the reintegration partnership strategy;
- Information about the status of project implementation;
- Information on the cost of an initiative relative to the observed benefits;
- Information about lessons learned.
2. Who will use the information? Users of evaluations vary, but generally include senior management, programme or project officers and managers, and others involved in design and implementation, as well as:
- National government counterparts, policymakers, strategic planners
- Donors and other funders
- Public and beneficiaries
- Academia
3. How will the information be used? Examples:
- To design or validate a reintegration strategy
- To make mid-course corrections
- To improve the intervention’s design and implementation
- To promote accountability
- To make funding decisions
- To increase knowledge and understanding of the benefits and challenges of the intervention
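For planning purposes, the answers to these three questions can be recorded together as a short evaluation-use note. The following sketch is a hypothetical illustration; the field names and example entries are assumptions rather than a required format.

```python
# A minimal, hypothetical record of the three use questions answered above.
evaluation_use_plan = {
    "information_needed": [
        "Status of project implementation",
        "Cost of the initiative relative to observed benefits",
    ],
    "users": ["Senior management", "Donors and other funders"],
    "intended_uses": ["Make mid-course corrections", "Promote accountability"],
}

# A quick completeness check before the evaluation design is finalized.
unanswered = [question for question, answers in evaluation_use_plan.items() if not answers]
if unanswered:
    print("Use of the evaluation not yet fully assessed:", unanswered)
else:
    print("All three use questions have been answered.")
```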
➔ Evaluation types are defined according to the timing and purpose of the evaluation, who conducts it, and the methodology applied. Depending on its timing and intended use, an evaluation can be carried out before the start of a project (ex-ante), in the early stages of an intervention (real-time), during implementation (midterm), at the end of the intervention (final) or after the completion of its activities (ex-post).
Evaluations can be conducted internally or externally, and individually or jointly. Whether an evaluation is conducted individually or jointly also depends on available resources and on how participatory the evaluation needs to be. It is highly recommended that the organization implementing the reintegration interventions take part in the evaluation.
- An internal evaluation is conducted by project management. It is an independent internal evaluation if conducted by somebody who did not directly participate in the conceptualization or implementation of the intervention, and a self-evaluation if done by those who are entrusted with the delivery of the project or programme.
- An external evaluation is conducted by someone recruited externally, usually by the donor or the implementing organization. External evaluations require the recruitment of consultants and can therefore be more expensive than internal evaluations. They are considered independent evaluations.
Some general considerations for planning and conducting an evaluation are included below. The questions are examples rather than an exhaustive list; each intervention needs to define its own specific questions.
Table 5.5: Considerations for planning and conducting an evaluation
Question | Guidance |
---|---|
How to conduct evaluations? | |
What questions should evaluations ask? | Depending on the purpose of the evaluation, questions should address a few questions per criterion, for instance on relevance, efficiency, effectiveness, impact and sustainability. |
How to define good practice? | Evaluations promote good practice and learning through the completion of case studies highlighting good practices, validation and, ideally, learning workshops with involved parties. In the field of reintegration, it is recommended to involve returnees and communities in both the data collection phase and the workshop stage to share good practices. |
How to respond to and use evaluation findings? | Evaluation findings should be discussed and responded to. |
How do we share findings from evaluations? | |
A sample terms of reference template for an evaluation is included in Annex 4.C.
One evaluation approach with good potential for better understanding the intended and unintended effects of reintegration programming is the most significant change (MSC) approach. MSC involves generating and analysing personal accounts of change and deciding which of these accounts is the most significant – and why.
There are three basic steps in using MSC:
- Deciding the types of stories to collect (or stories about “what”: for example, about practice change, health outcomes or empowerment);
- Collecting the stories and determining which stories are the most significant;
- Sharing the stories and discussing values with stakeholders and contributors so that learning takes place about what is valued.
MSC is not just about collecting and reporting stories but about having processes to learn from these stories – in particular, to learn about the similarities and differences in what various groups and individuals value.
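Where MSC stories are collected systematically, the three steps can be mirrored in a simple record-keeping structure. The sketch below is illustrative only; the story domain, the vote-based selection and all names are assumptions about how a team might organize the process, not part of the MSC approach itself.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """A personal account of change collected for MSC (hypothetical structure)."""
    domain: str        # step 1: the type of change the story is about
    storyteller: str
    text: str
    votes: int = 0     # step 2: stakeholder panel votes for "most significant"
    reasons: list = field(default_factory=list)  # why it was (or was not) selected

def most_significant(stories, domain):
    """Step 2: within one domain, return the story the panel judged most significant."""
    candidates = [s for s in stories if s.domain == domain]
    return max(candidates, key=lambda s: s.votes, default=None)

# Step 3 is then sharing the selected stories and the recorded reasons with
# stakeholders and contributors so that what is valued becomes explicit.
stories = [
    Story("livelihood change", "Returnee A", "Reopened a small shop...", votes=4,
          reasons=["Shows a durable change in income"]),
    Story("livelihood change", "Returnee B", "Completed vocational training...", votes=2),
]
selected = most_significant(stories, "livelihood change")
print(selected.storyteller if selected else "No stories collected for this domain")
```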