Norwegian aid is wasted
Or is it? Do aid efforts to reduce child mortality actually work? Do projects aimed at female empowerment lead to less violence against women?
To answer such questions, development aid projects and programs must be designed in a way that makes it possible to evaluate them and to measure results. CMI and ITAD have won a project to evaluate the Norwegian aid administration and find out whether current regulations and practices ensure the evaluability of aid interventions.
If you want to measure the results of any activity, it is a great advantage to know what the starting point is, be it child mortality incidence, violence rates or poverty indicators. Even so, development aid interventions rarely include baseline surveys. Pinning down results that can be attributed to a particular aid intervention requires careful planning that should be integrated in the design of the project itself, argues Senior Researcher Espen Villanger.
- Donors and policymakers often assume that the effect of aid is positive irrespective of what is funded, despite the fact that the conditions for making anything work at all in many poor countries are very difficult. In every rich country there is a profound debate about how to create jobs for people. In poor developing countries, on the other hand, it is often taken for granted that a project designed to create employment will be successful, despite failures in this area in rich countries where most things work well and the framework and infrastructure are in place. The need to document the actual impact of aid on people's lives is often overlooked, says Espen Villanger.
In a new evaluation funded by Norad, CMI, in close cooperation with ITAD, will evaluate the prerequisites for measuring the effectiveness of aid.
No documentation of effects of Norwegian aid
Evaluations of Norwegian aid projects have not been able to give clear indications of whether aid interventions have the intended effects. As Norad's Evaluation Department (EVAL) itself has stated, "none of the evaluations and studies commissioned by EVAL and finalized in 2011 could report sufficiently on results at the level of outcomes or impact". The difficulty in finding clear indications of results could have a number of causes.
A first reason could lie with the evaluations themselves, or with the evaluation system. If the Norad staff responsible for commissioning evaluations are not aware of the strict methodological requirements for proper results assessment, the evaluation itself is unlikely to deliver accordingly. Even if the staff are able to write these requirements into the evaluators' contracts, the evaluators must have the skills to conduct a proper assessment, and there must be sufficient time and resources for such a mandate. Again, if this is not in place, the evaluation is very unlikely to tell us anything useful about the results. In addition, there is a question of whether the evaluation system places enough emphasis on results measurement in the planning, commissioning and quality assurance of evaluations.
A second reason could lie with the Norwegian aid administration itself: if the evaluation is not planned as part of the intervention, it may be impossible to identify the impacts of an aid project at a later stage, or long after the project has closed.
To provide answers, the researchers from CMI and ITAD will analyse the existing Norwegian system for evaluating development assistance and current practice. The work entails assessing recent evaluations that were intended to identify results, tracking the whole process from the planning of the evaluations to the final reports to find out where things went wrong, and why. The team will also map the aid administration's procedures for ensuring evaluability, assess current practices, and benchmark the Norwegian evaluation system against international best-practice institutions. While policies and systems provide the institutional framework for practice, the evaluation staff need the skills to follow the policies and put the systems into place, and the organisational culture needs to support their implementation. Does the answer lie in the systems, the skills or the organisational culture? In other words, are the challenges to be found in the organisation's hardware or its software?
- The mandate of this study is to find out why it is not possible to say very much about the effects of Norwegian aid, despite the considerable resources that go into evaluating those efforts, says Villanger.
More than 20 000 projects
During the different stages of the project, the researchers will evaluate data from Norwegian development aid projects and programs from 2008 to 2011. The database consists of 21 000 development aid projects, amounting to 26-27 billion NOK each year. A random selection of 20 projects will be assessed in detail in order to determine whether it is in fact possible to conduct proper results evaluations.
The objective of the evaluation of the Norwegian aid administration is to promote learning and progress by identifying the reasons for insufficient results documentation and providing evidence-based recommendations for improvement. In the end, the aim is to improve development aid, which is difficult if the results of the assistance are not known.
- There is enormous potential for learning from the failures of past evaluations and the design of aid interventions. This evaluation will result in an overview of the current regulations and practices used to ensure evaluability, and an analysis of what works and which impediments hinder thorough results measurement. Throughout the project, researchers from CMI and ITAD will also provide evidence-based recommendations that enable Norad to improve its evaluation policies and skills, says Villanger.