It’s been roughly five years since the release of the USAID Evaluation Policy. USAID recently released “Strengthening Evidence-Based Development: Five Years of Better Evaluation Practice at USAID” to renew the agency’s commitment to investing in high-quality evaluation practices that inform effective program management, demonstrate results, promote learning, and provide evidence for decision-making. The 226-page report details what USAID has learned since it first published its Evaluation Policy five years ago and how the agency can build and strengthen its evaluation practices. Diana L. Ohlbaum of CSIS wrote a brilliant response piece that really resonated with me: USAID Evaluations at Five: Known Unknowns and Unknown Knowns. It is well worth the read!
A couple of interesting things to note about USAID’s evaluation practices covered in the report:
- Most of the evaluations were conducted late in the program cycle, so results were used for new project and activity design rather than for mid-course corrections.
- Over the past 5 years, USAID has relied almost exclusively on performance evaluations rather than impact evaluations: 97% of the evaluations in the sample used for the report were performance evaluations.
- In the majority of cases, evaluations were conducted at the individual activity and project level, where impacts tend to be limited, rather than at the sector or program level. Interestingly, the study could not find a single example of evaluation data being used to inform decisions regarding USAID policies themselves.
- The study found that “learning is higher for USAID when country partners participate in the evaluation process.” However, only 24% of all evaluations in the study were planned with the involvement of the country partners.
At AEA2015 back in November, Micah Frumkin and Molly Hageboeck from Management Systems International presented the major findings from the report. Take a look at the visual notes from that session below!