What do we really mean by timing in ECD? As the earlier story, and many other ECD stories we have collected over the last few months, demonstrate, there are two distinct elements to ECD timing. First, it matters when you start ECD and how ECD is eased into the organization’s schedule. Second, it matters how you sequence the activities that make up your ECD program.
This lit review is neither systematic nor comprehensive; however, it does provide a fairly good overview of some of the more substantive engagements with the idea of evaluative thinking that have appeared in evaluation journals, books, and reports over the past few years.
This blog post, and subsequent feedback and comments, provides a useful starting point for understanding how to build individual as well as organizational capacity to "engage in evaluation practice."
This event will introduce the Contribution to Change Guide. Oxfam (on behalf of the ECB agencies) and in partnership with the University of East Anglia/OPM have produced a guide that aims to provide one reliable and practical method for...
This blog post highlights a range of online learning opportunities from BetterEvaluation and its partners, including EvalPartners, MEASURE Evaluation, American Evaluation Association and Quebec Evaluation Association.
In addition to upcoming courses, the list includes links to past training materials, YouTube channels and webinar recordings that are available for free download.
The Regional Monitoring and Evaluation Advisor (P5 level) provides authoritative advice and cutting-edge technical expertise to the Regional Director, Deputy Regional Director, Country Offices and g...
This resource guide covers: functions and principles, stakeholders, program logic, evaluation questions, monitoring plan, evaluation plan, data collection and management, learning and communication strategy and the format of the framework.
Synthesis of diverse consultations facilitated by USAID's SEEP network between 2010 and 2012
The paper discusses the key obstacles in the current M&E paradigm and suggests seven principles for building "usable systemic M&E frameworks." The paper concludes with some concrete ideas and proposals to help practitioners develop their own M&E frameworks.
This toolkit aims to support communication-for-development organizations to critically reflect on their work.
The six-module toolkit takes organizations through the process of including external stakeholders in their monitoring and evaluation work. The toolkit mostly makes use of qualitative approaches in order to better capture "new and unexpected" insights into the subtle processes that are involved in social change. The toolkit can therefore be used to enrich more conventional quantitative evaluation methodologies.
An evaluation tool that encourages respondents to think about the degree to which a programme contributed towards change, with a mechanism for verifying such "impact stories."
Stories are not often considered robust evidence of impact in development evaluation. This new tool aims to provide guidance for developing rigorous "impact stories" that could be used as part of an evaluation. The methodology guides project participants through the process of sharing "examples of changes in skills, knowledge, attitudes, motivations, individual behaviors or organizational practice." A link to the full paper introducing the methodology, co-authored by B. Williams and T. Chilaika, is also provided.
The toolkit contains guidance and tools on how to plan, design, implement, monitor and evaluate advocacy strategies to promote national evaluation policies and systems that are equity-focused and gender-responsive.
This toolkit aims to support evidence-based policymaking, transparency and learning by helping users understand the role of advocacy in increasing demand for evaluation.
The toolkit provides a series of incremental steps that can be taken to effectively advocate for national evaluation policies and systems that are equity-focused and gender-responsive.
In one of his grumpier moments, Owen Barder recently branded me as ‘anti-data’, which (if you think about it for a minute) would be a bit weird for anyone working in the development sector. The real issue, of course, is what kinds of data tell you useful things about different kinds of programme, and how you collect them. If people equate ‘data’ solely with ‘numbers’, then I think we have a problem.
Do you need to build effective monitoring and evaluation into project and programme work for both accountability and learning? This programme will strengthen your skills in supporting the monitoring and evaluation of projects and programmes from programme design through to evaluation.
In this blog, an Oxfam staffer attempts to synthesize lessons from a two-year project at the organization aimed at better understanding and communicating the effectiveness of its work globally. At the heart of this investigation is exploring monitoring tools that can help address the "...inherent tension between organisational accountability and programme learning."
This CGD paper examines how policymakers and practitioners should interpret the impact evaluation literature when presented with conflicting experimental and non-experimental estimates of the same intervention across varying contexts.
The 17 guidance notes in this publication explain and demonstrate how to assess capacity development efforts by reviewing and documenting the results of ongoing or completed capacity development activities, projects, programs or broader strategies.
The Guide is based on the World Bank Institute's Capacity Development and Results Framework (CDRF), which was designed to provide a systematic approach and a set of tools for development practitioners to design a rigorous yet flexible capacity development strategy. The CDRF can be used both to test programme logic ex ante, and to measure and evaluate results ex post.
Review of new ODI publication: "Unblocking results: using aid to address governance constraints in public service delivery." The paper presents the results of a study that analysed the role of aid agencies "when things go well", based on four success stories in Tanzania, Sierra Leone and Uganda.
The webinars present two participatory methodologies for developing and monitoring against community-based adaptation (CBA) indicators, enabling local stakeholders to articulate their own needs, a fundamental part of building and strengthening adaptive capacity.