Module 5: Monitoring and evaluation for reintegration assistance

5.5 Learning and generating knowledge from monitoring and evaluation

One of the most direct ways of using knowledge gained from M&E is to inform ongoing and future planning and programming. Lessons from evaluations of programmes, projects and initiatives – and from management responses – should be available when new outcomes are formulated or when projects or programmes are identified, designed and appraised.

Institutionalizing the learning process can be achieved in part by better incorporating learning into existing tools and processes. As addressed in the first section, results-based management is an effective approach to cultivating organizational learning throughout programming. Knowledge products can take many different forms, depending on the audience and its information needs. For meaningful learning and knowledge-sharing, knowledge products should be of high quality and have a clearly identified audience and purpose. A good knowledge product, including a good publication, is:

  • Based on demand for the product among targeted users (this means that the product will be relevant, effective and useful);
  • Designed for a specific audience;
  • Relevant to decision-making needs;
  • Written in clear and easily accessible language, with data presented clearly;
  • Based on an unbiased evaluation of the available information.

As stated above, a practical way to use collected data and findings in evidence-based programming is to have a strategy for communicating findings and good practices, for example through webinars, workshops, or the production of flyers and information sheets on findings.

To sum up this module, the M&E process throughout an intervention follows these key stages:

Planning

1. Review lessons from previous initiatives, including information from M&E activities already conducted, if available.

2. Clearly define the overall objective and the results the reintegration intervention hopes to achieve, for instance by creating a theory of change or a logical framework.

3. Develop and define relevant indicators. Start creating the data collection and analysis plan at this time.

4. Identify whether an evaluation or review will be used for this intervention.

5. Assess the budget required and identify who will need to be involved in M&E activities.

Startup

6. Finalize the monitoring data collection and analysis plan; thinking on this plan should begin during indicator selection and project design.

7. Establish a baseline within two months of starting implementation. The exact timing of baseline data collection can vary, depending on the intervention.

Implementation

8. Collect data from different sources, using different methods. A “mixed methods” approach, combining quantitative and qualitative methods, is recommended for data collection and monitoring.

9. Analyse, interpret and share findings. Data collected should be used to inform good practices and evidence-based programming (a minimal illustration of this step follows the list of stages).

Closure and review

10. Review and evaluate. Reflect on the intervention’s achievements and lessons learned, and use this information to shape future interventions.
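
To make step 9 more concrete, the short Python sketch below shows one simple way of analysing mixed-method monitoring data: computing a quantitative indicator from survey scores and tallying themes coded from qualitative interviews. All field names, scores and themes in the sketch are hypothetical, invented for illustration only; they do not come from any real reintegration programme or data collection tool.

    # A minimal, hypothetical sketch of step 9: analysing mixed-method monitoring data.
    # All field names, scores and responses below are illustrative only.

    from collections import Counter

    # Quantitative: hypothetical survey records with a 0-5 self-reported
    # economic reintegration score per beneficiary.
    survey_records = [
        {"beneficiary_id": "B001", "economic_score": 4},
        {"beneficiary_id": "B002", "economic_score": 2},
        {"beneficiary_id": "B003", "economic_score": 5},
        {"beneficiary_id": "B004", "economic_score": 3},
    ]

    # Qualitative: hypothetical codes assigned to open-ended interview answers
    # during a simple thematic coding exercise.
    interview_codes = [
        ["access_to_work", "community_support"],
        ["access_to_work", "housing"],
        ["housing"],
    ]

    # Quantitative indicator: share of respondents scoring 3 or higher.
    threshold = 3
    n_meeting = sum(1 for r in survey_records if r["economic_score"] >= threshold)
    share = n_meeting / len(survey_records)
    print(f"Share scoring >= {threshold}: {share:.0%}")

    # Qualitative summary: most frequent themes across interviews.
    theme_counts = Counter(code for codes in interview_codes for code in codes)
    for theme, count in theme_counts.most_common():
        print(f"Theme '{theme}': mentioned in {count} interview(s)")

In practice, such analysis would follow the data collection and analysis plan developed during planning and start-up, and its findings would feed into the reports and knowledge products described above.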

Impact evaluation

“Impact evaluations are a particular type of evaluation that seeks to answer a specific cause-and-effect question: What is the impact (or causal effect) of a program on an outcome of interest? This basic question incorporates an important causal dimension. The focus is only on the impact: that is, the changes directly attributable to a program, program modality, or design innovation.”50
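
To illustrate the arithmetic behind this cause-and-effect question, the simplified Python sketch below compares a hypothetical outcome between programme beneficiaries (a treatment group) and a comparison group standing in for the counterfactual. The numbers are invented, and a real impact evaluation would require a credible comparison group and an appropriate design (for example, through randomization); the sketch only shows a basic difference in means.

    # A simplified, hypothetical illustration of the impact evaluation question:
    # comparing an outcome between beneficiaries (treatment) and a comparable
    # non-beneficiary group. The numbers below are made up for illustration.

    treatment_outcomes = [62, 70, 58, 66, 74]   # e.g. a hypothetical income measure
    comparison_outcomes = [55, 60, 52, 58, 61]

    mean_t = sum(treatment_outcomes) / len(treatment_outcomes)
    mean_c = sum(comparison_outcomes) / len(comparison_outcomes)

    # Estimated impact: the difference attributable to the programme, assuming
    # the comparison group approximates the counterfactual.
    estimated_impact = mean_t - mean_c
    print(f"Treatment mean: {mean_t:.1f}")
    print(f"Comparison mean: {mean_c:.1f}")
    print(f"Estimated impact: {estimated_impact:.1f}")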

For more information: www.youtube.com/watch?v=HEJlT8t5ezU

50 Gertler, P., S. Martinez, P. Premand, L. Rawlings and C. Vermeersch, Impact Evaluation in Practice. World Bank (Washington, D.C., 2011).