Too often, evaluations are treated as a ‘slapstick’ exercise. Results may point to critical flaws in the program or intervention, but there is resistance from project custodians who may not agree and have a different interpretation. In contrast to the old saying that “we learn from our mistakes”, fear of failure and entrenched views can stand in the way of meaningful learning. But it’s a two-way street, and both clients and service providers bear responsibility in the process.
When doing monitoring and evaluation work, we always need to keep in mind that once the evaluation is finished, the work for the client is just beginning: we move from evaluation to implementation. Here are some useful hints that can help.
- Preparation is everything. In high school our sports coach used to say: “practice doesn’t make perfect; perfect practice makes perfect”. The same logic applies to evaluations. Invest time and effort up front with proper desk reviews and so forth. On the client side, ensure the project timeline allows for proper preparation. Cutting corners at this stage will only add time towards the end, and results may not be in line with expectations.
- Hold a developmental workshop. If appropriate, bring project custodians and key stakeholders together in a workshop at the outset of the project. Use the workshop to share and discuss critical issues, gain input and acceptance of the methodology, and cover practical and logistical matters. Most importantly, gain ‘buy-in’ from stakeholders by getting to know them and understanding how they think and what their expectations are. With a bit of preparation, it is amazing how much can be achieved in a workshop like this.
- Identify critical performance areas. If the preparation work is done correctly, you will be able to anticipate critical performance areas and can begin a discussion around them early in the process. Questions in the tools to be used can then be formulated to explore these areas specifically. By bringing potential issues out into the open at an early stage, people become sensitized to them, and talking about them becomes less threatening.
- Look for balance in results. Even poorly executed programs and interventions will have some strong points. It is very important to balance critical improvement areas with areas of strength that can be built on in the future. Programs are similar to people: as humans, we develop by harnessing our strengths and improving or compensating for our weaknesses. If a critical weakness or issue comes up, it is good practice to share it as soon as possible. This will allow people to comment on and contemplate the result and its implications before reading the final report.
- Make realistic and actionable recommendations. This is easier said than done but is the hallmark of a good evaluation. When ‘lessons learned’ are confused with recommendations (this happens!), it is often a result of inadequate preparation, and the results become a ‘blur’ between what goes to the heart of the evaluation objectives and what resulted from unforeseen events.
By following these simple steps, by the time results are presented there should be few surprises, and those involved will already be working in a spirit of collaboration. These steps can work even for rushed projects. In such cases, consider a few consultation interviews to complement the desk review and, instead of a workshop, perhaps a couple of conference calls. Check out this award-winning case study on Collaborating, Learning and Adapting (CLA) based on a study done for USAID: https://rapid-asia.com/rapid-asia-on-the-go/cla-case-studies-complex-problems-needs-simple-solutions-by-rapid-asia/
About the Author: Daniel Lindgren is the Founder of Rapid Asia Co., Ltd., a management consultancy firm based in Bangkok that specializes in evaluations for programs, projects, social marketing campaigns and other social development initiatives. Learn more about our work at www.rapid-asia.com.