Hi friends! I hope the last few days of 2016 are wrapping up nicely for you. Earlier this year, I attended the American Evaluation Association’s Annual Conference in Atlanta, Georgia. The theme of this year’s conference was “Evaluation + Design.” My biggest takeaway from the conference was the heavy focus on learning. There’s quite a lot to sift through here, so I’ll let the visuals speak for themselves. As always, if you have any questions or would like to learn more about the sessions I attended, please don’t hesitate to leave a comment.
Hello friends! A few weeks ago, I attended MERLTech 2016 and perused my visual notes from MERLTech 2015 before the conference began. My key takeaways from MERLTech 2015 (indicated by a green star in the notes) were:
- We don’t ask frequently enough how often M&E projects become self-supported and are sustained long term.
- Successful organizations focus more time/resources on training decision-makers how to interpret data and less on improving the accuracy of the data itself.
- When it comes to tech tools, it’s important to understand what participants are already using and use that platform to communicate/collect data.
- We are learning the same lessons over and over again. We end our reports with “lessons learned,” but are we actually learning? (Spoiler alert: No.) How can we actually help our teams and organizations to learn?
- If we really want to have “locally-led” initiatives, we need to reverse our thinking around who “owns” the data we are collecting.
- Before putting time and resources into answering learning questions – check to see if others have sought to answer those same questions already.
- A well-functioning relationship between MERL and program management is critically important.
So, have we actually learned since MERLTech 2015? Take a look at the visual recap of MERLTech 2016 below and post your comments. What have you learned this past year? Stay tuned for a follow-on blog post from me on USAID’s Learning Lab site!
Hi friends! Over the past couple of weeks, I have been doing a great deal of graphic recording and wanted to share some of my doodles with you all. I attended the International Forum for Visual Practitioners Annual Conference back in July and had the opportunity to try my hand (literally) at graphic recording for the first time ever (see the first picture below)! I learned so much. I left IFVP2016 feeling so incredibly inspired by the brilliant and warmhearted graphic recorders I met during the conference. Since then, my mind has been buzzing with ideas for how I can use and share my graphic recording skills for good! The most important thing I learned at the conference is that being a good graphic recorder has nothing to do with your artistic abilities (really!) and it’s an easily teachable skill. It’s all about being a good listener. I’ll be doing some formal and informal graphic recording trainings (one of which will literally be taking place in my living room!) over the next couple of months. Stay tuned for the recap!
Doodlin’ at IFVP (my first graphic recording ever!):
Doodlin’ during an event at my full-time job (one of the many reasons why I love my job!):
For a closer look:
I also did some doodlin’ at a recent event that my friends hosted about the importance of intersectionality in social movements. To learn more about what we discussed, take a look below:
Until next time! Happy September, friends!
It’s been roughly five years since the release of the USAID Evaluation Policy. USAID recently released “Strengthening Evidence-Based Development: Five Years of Better Evaluation Practice at USAID” to renew the agency’s commitment to investing in high-quality evaluation practices that inform effective program management, demonstrate results, promote learning, and provide evidence for decision-making. The 226-page report details what USAID has learned since it first published its Evaluation Policy five years ago and how the agency can build and strengthen its evaluation practices. Diana L. Ohlbaum of CSIS wrote a brilliant reaction piece that really resonated with me: USAID Evaluations at Five: Known Unknowns and Unknown Knowns. It is well worth the read!
A couple of interesting things to note about USAID’s evaluation practices covered in the report:
- Most of the evaluations were conducted late in the program cycle, so results were used for new project and activity design rather than for mid-course corrections.
- Over the past 5 years, USAID has relied almost exclusively on performance evaluations rather than impact evaluations: 97% of the evaluations in the sample used for the report were performance evaluations.
- In the majority of cases, evaluations were conducted at the individual activity and project level, where impacts tend to be limited, as opposed to at the sector or program level. Interestingly, the study could not find a single example of evaluation data being used to inform decisions regarding USAID policies themselves.
- The study found that “learning is higher for USAID when country partners participate in the evaluation process.” However, only 24% of all evaluations in the study were planned with the involvement of the country partners.
At AEA2015 back in November, Micah Frumkin and Molly Hageboeck from Management Systems International presented the major findings from the report. Take a look at the visual notes from that session below!
Happy hump day, evaluation enthusiasts! On March 10, 2016, I participated in a CLA brown bag hosted by USAID LEARN during which Ella Duncan, Charles Christian, and Morgane Ortmans presented on Search for Common Ground’s CLA work in Lebanon. The group spoke about how participatory, reflective practices are essential in SFCG’s peacebuilding work. These practices help SFCG teams adapt to rapidly changing contexts and achieve sustainable impacts toward peace. This is especially true in Lebanon, where SFCG projects are addressing ongoing conflict factors, including working through tensions resulting from the influx of Syrian refugees, security sector reform, and women’s socioeconomic empowerment. Check out my thoughts on the session and take a look at the visual recap below!
A few weeks ago, I visually recorded a guest expert session for TechChange’s 211 course on Technology for Data Collection and Survey Design. Gabe Krieshok, the ICT4D Advisor at Peace Corps, spoke about the challenges and considerations PCVs face when it comes to data collection and survey design in the field. Take a look at the highlights from the session below:
On February 11, 2016, I had the pleasure of attending a USAID presentation that sought to highlight successful uses of CLA in the field and share lessons learned widely across the development sector to promote learning. Emily Janoch, Senior Technical Advisor for Knowledge Management and Learning at CARE, led a session on “Putting Communities at the Heart of Learning and Adapting.” She spoke about how CARE is utilizing a Participatory Performance Tracker (PPT) to work together with communities to adapt programs and get better results.
The tool itself allows communities to sit down and talk through what activities they are and are not doing across different areas of their work. Unlike standard M&E practices, the PPT is housed by the community where it is being used and presents a powerful opportunity for reflection and group cohesion. The PPT is a unique data collection tool because in addition to collecting valuable data, it also provides a space for community members to hold one another accountable, offer support and assistance, and air their thoughts or ideas about what is or is not working about a particular project. Take a look at my visual recap of Emily’s brilliant session below:
I recently helped out as a course facilitator for TechChange’s TC111 course: Technology for Monitoring and Evaluation and learned a tremendous amount about the different options and tools that can be used for mobile data collection and analysis. Take a look at the visual notes from a session led by Amanda Berman, Senior Research Data Analyst at Johns Hopkins Center for Communication Programs, about M&E tech for the Ebola crisis response.
Happy Wednesday, friends! I recently helped out as a course facilitator for TechChange’s Technology for Monitoring and Evaluation course. Samir Doshi and Joshua Kaufman shared some interesting points about M&E practices at USAID’s Global Development Lab. The Lab is a new entity within USAID that brings together a diverse set of partners to discover, test, and scale breakthrough solutions with the goal of ending extreme poverty by 2030. Take a look at the notes below to get a sense of what was covered.
This past weekend, I had the opportunity to participate in AlterConfDC, a conference organized by AlterConf that brought together an eclectic, brilliant, and passionate group of people to talk about inclusion and diversity in the tech and gaming industries. By highlighting the powerful voices and positive initiatives of organizations and individuals in the DC area, the conference strengthened our community’s resolve to create safer, healthier spaces for all. I learned a tremendous amount from the presenters and participants and had many personal “aha” moments of my own. Take a look at the notes below to get a sense of what was covered: