Resources
- Publication
How do marginalized communities build systemic power that is enduring rather than merely episodic? For many grassroots organizers, the answer is deeply rooted in culture.
- Presentation
This presentation provides the context for the 2014 Evaluation Roundtable convening, as well as discussion outlines, benchmarking data, case examples, and key lessons and implications for evaluation and learning.
- Publication
This framing brief, developed for the 2014 Evaluation Roundtable convening, explores whether and how the sector's shift in strategy mindset and practice toward greater complexity and emergence calls for changes in the role of evaluation and learning in foundations.
- Publication
Decades of research have shown that despite the best of intentions, and even when actionable data are presented at the right time, people do not automatically make good and rational decisions. This brief highlights common cognitive traps that can trip up philanthropic decision making, and suggests straightforward steps to counter them.
- Publication
Evaluation for strategic learning is the use of data and insights from a variety of information-gathering approaches—including evaluation—to inform decision-making about strategy. This brief explores organizational preparedness and situational suitability for evaluation that supports strategic learning, and how to understand if this type of evaluation is working.
- Publication
The promotion and protection of human rights around the world are driven by principles of transparency and accountability. These same principles drive monitoring and evaluation (M&E) efforts. Yet conceptual, capacity, and cultural barriers often discourage the use of M&E in human rights work. This brief offers concrete examples of how to tackle the unique challenges of evaluating human rights work.
- Publication
One of our most popular publications, this brief, produced in collaboration with ORS Impact, summarizes 10 theories grounded in social science about how policy change happens. The theories can help to untangle beliefs and assumptions about the inner workings of the policymaking process and identify causal connections supported by research to explain how and why a change may or may not occur.
- Publication
How can foundations avoid the traps that sabotage their learning and hamper their ability to guide strategy in complex contexts? This article explores a series of self-created “traps,” including 1) linearity and certainty bias; 2) the autopilot effect; and 3) indicator blindness.
- Presentation
This presentation, developed for the 2012 Evaluation Roundtable convening, examines how foundations structure their evaluation and learning functions, invest in evaluative activities, and use evaluative information. Findings are based on surveys of 31 foundations with a strong commitment to evaluation and on interviews with 38 foundations.
- Publication
Conventional program evaluation is a poor fit for the uncertain and emergent nature of innovative and complex initiatives. Developmental evaluation offers an alternative. This article offers five practices to help developmental evaluators detect and support opportunities for learning and adaptation leading to right-timed feedback.
- By Tanya Beer
- Publication
Evaluation in philanthropy, carried out by staff assigned to evaluation-related responsibilities, began in the 1970s and has evolved along with philanthropy in the decades since. This Foundation Review article presents findings from 2012 research on what foundations are doing on evaluation and discusses their implications.
- Presentation
Drawing on benchmarking data gathered from the Evaluation Roundtable network, this presentation examines organizational barriers to learning from strategies, warns against cognitive traps that hinder learning and decision-making, and describes approaches to avoid or counteract those traps. It explores how the culture and role of evaluation in foundations can disconnect learning from strategy.
- Publication
This Foundation Review article outlines how evaluation differs for two main types of grantmaking programs: models, which provide replicable or semi-standardized solutions, and adaptive initiatives, which are flexible programming strategies for problems that require unique, context-based solutions.