
Measuring to Learn

Last fall, the Rita Allen Foundation gathered six emerging nonprofits to talk about performance measurement. Each of the organizations works on some aspect of civic engagement. One, for instance, is based in Seattle and seeks to change attitudes and actions through environmental news with a twist of humor. How do you know if that's working? Over the course of several workshops, our conversations were fruitful but difficult. Conversations about measurement usually are.

Bill Gates caused a stir with his recent annual letter calling for more and better evaluation of programs working on poverty, health, and education. Critics objected that his business-centric model is flawed, that data always carries an agenda, and that what's really important won't be measured.

Doing effective evaluation in the social sector is inherently difficult. The Center for Effective Philanthropy’s Phil Buchanan runs through some of the problems: “Attribution—and even contribution—is exceedingly challenging to pinpoint. The counterfactuals are often impossible to know. There is no common unit of measurement across myriad programs—no analog to ROI—and there never will be.” At PopTech, Andrew Zolli adds to the list of potential problems: failing to establish a control group, practice effects, regression effects, placebo effects, compensation effects, and selective dropout, for starters. If your project is aimed at changing public opinion or policy, things get even more complicated. And for an emerging organization, time and money are needed for many other urgent things besides evaluation—like actually carrying out the work to be evaluated.

The leaders who participated in our fall capacity-building program on measurement and evaluation are fully aware of these barriers, but they are still deeply committed to finding effective means of evaluation. They want to demonstrate to others the value of their work, but most of all they are interested in knowing how they can be more effective. They’re in a position not so different from the biomedical researchers we support as Rita Allen Foundation Scholars: they want to measure to learn.

Their sessions of collaborative learning and planning (with the assistance of Root Cause) yielded some lessons worth sharing with other emerging organizations that want to make measurement and evaluation work for them.

Define what success means to you. Organizations seeking to create change in society have ambitious goals by definition. An important first step in effective evaluation is breaking this broad vision into pieces. Imagine the world once you have been successful. What are its specific characteristics? How did they come about? Defining on-the-ground goals doesn't mean they can't change; they probably should change, or at least grow more detailed, as you test what works and as the context shifts around you. But having goals will help you know what signs to look for as evidence of progress and setbacks.

Ask what you need to know. Given the difficulty of evaluating impact in the social sector and the limited staff time and resources available for evaluation, every measurement effort should be directed at answering your most important (but measurable) questions. Are we reaching our intended audience? Are we creating the new connections we wanted to? After becoming involved in our program, do people change their behavior in the way we expected?

Find ways of measuring that help. Particularly for emerging organizations, it's important to measure impact in ways that provide useful insight with the resources available. The Hewlett Foundation recently shared an immensely helpful internal paper on evaluation that offers this sound principle: "Our goal is to maximize rigor without compromising relevance." Combine quantitative data with qualitative explanations. When randomized experiments would be difficult, at least go through the process of considering why your first explanation of your results might be wrong and what other factors might be at play. Particularly with advocacy, Steven Teles and Mark Schmitt convincingly argue, effective evaluation is more of a craft than a science.

Consider the role you play in a bigger network. Complex problems are rarely solved by one breakthrough; they require many people, over time, building up innovations and finding new ways of working together. Understanding where you fit in a network of interventions can help connect measurable goals (How many people are sharing content from our website?) with a big social mission (changing how people treat the environment). A good way to start is by creating a detailed map of the players in your field, which may also reveal potential collaborations or existing resources to leverage.

Get your funders on the same page. A recent Center for Effective Philanthropy report confirmed what we've heard from many organizations working on social change: funding for conducting evaluations is in short supply. Of nonprofits surveyed, 71 percent reported receiving no funding or support from foundations for their assessment efforts. They lack support not only for evaluations driven by their own needs but even for evaluations their funders require. More than half of nonprofits also want more discussion with their funders about assessment, including what data to collect and how to build staff skills in collecting and interpreting it. Funders should take the lead in opening up these conversations and providing the resources needed for effective evaluations. Organizations can help, though. Be open about the importance of measurement to your work and your need for support in this area. Start conversations early about what assessment your funders will expect and what resources it will require of you. And have regular conversations about how you define success for your work.

For the leaders and organizations we work with, failure is a big part of success. We invest in high-risk, innovative approaches to some of the trickiest problems out there. Whether in cancer research or civil society, progress comes from trying many approaches that might work, making a lot of adjustments, and finding a few that do. Observing why some approaches fail builds further knowledge of a complex environment. Failure can be just as useful as success, if you're ready to learn from it.