Revisiting The Frame For Evaluation

In Profiles & Interviews by Brenna Marea Powell

We can nearly wear the tread off a word during cycles of popularity or necessary use. “Impact” fits this category. Regardless of possible misuse or overuse, however, it’s worth the attempt to maintain a real definition and to use the word meaningfully; there is little option of abandoning it without playing word games. For the social sector, “impact” is a uniquely complex proposition that involves various types of data and measures to demonstrate success. Evaluation is a route to defining impact clearly. Brenna Marea Powell, Associate Director of the Stanford Center on International Conflict and Negotiation, resets the frame for evaluation as we open this theme on Markets For Good.

I’ve noticed that the word “impact” can elicit groans. Even worse is “impact evaluation,” which conjures up jargon-riddled efforts to quantify the unquantifiable and dull the feel-good factor. Talk about impact is everywhere, but there’s a good deal of muddiness about what we’re doing when we try to measure impact, and a lack of enthusiasm about why we’d want to do so.

Meaningful impact evaluation is a learning tool, not a report card or a validation exercise.  It’s useful because it allows us to learn about what we’re doing, assess whether we’re achieving our outcome goals, and recalibrate our approach based on what we find.  It helps us be smart about what we do.  The alternative is fumbling around with the lights off.

So if you find yourself confused or on the verge of a groan, here is a very quick framework that lays out the core elements of a meaningful impact evaluation in layman’s terms.

The purpose of impact evaluation is to understand whether a given initiative (or intervention, in more formal language) is having the broader effects it is designed to have.  A really good impact evaluation rests on four questions:

1) What are the intended effects of this intervention?

2) What is the evidence that this intervention is in fact having these effects?

3) What can we learn about why (or why not) this intervention is having these effects?

4) Is there any evidence that the intervention is having unintended effects we should care about?

No fanfare, no “Mission Accomplished” banners.  A strong evaluation will address these questions clearly and in simple terms.  Here are some keys to doing so.

Specifying effects. The first question means spelling out what the intended effects are, on which population(s), and on what timeframe. There may be multiple effects; many initiatives are designed to have layers of effects on individuals, households, and communities. It matters less whether the intended effects appear optimistic, cautious, or naïve. What is important is that they are explicitly articulated, because without a clear answer to this question there’s really nothing to evaluate.

Observing effects. We have to decide how we would recognize the effects if we saw them. What are the observable features of the effects we’re looking for? These might be indicators related to health outcomes or economic well-being, or behaviors and attitudes that can be observed and measured with a little creativity. This is less often impossible than you might think.

The right counterfactual.  Once we understand what effects we’re looking for, we want to know whether any effects we observe can be attributed to our intervention (as opposed to some other change going on in society).  This means finding the right counterfactual or comparison group.  We need to compare the treatment group (the population where the intervention has been implemented) to another group that best approximates our treatment group (only without the treatment).  There are different ways to do this depending on the context—randomization, careful matching informed by good contextual knowledge, or other techniques.
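
For readers who like to see the logic in concrete form, here is a minimal sketch of that comparison under randomized assignment. It is a toy illustration, not a real analysis: the outcome scores, group sizes, and effect size are all invented, and a real study would also report the uncertainty around the estimate.

```python
# A minimal, hypothetical sketch: estimating an intervention's effect
# when assignment to treatment was randomized. All numbers are invented.
import random
import statistics

random.seed(42)

# Hypothetical outcome scores (say, a well-being index) for households
# randomly assigned to receive the intervention or not.
treatment = [random.gauss(62, 10) for _ in range(200)]  # received the intervention
control = [random.gauss(55, 10) for _ in range(200)]    # the comparison group

# Because assignment was random, the control group approximates the
# counterfactual: what the treatment group would have looked like
# without the intervention. The difference in means estimates the effect.
effect = statistics.mean(treatment) - statistics.mean(control)
print(f"Estimated average effect: {effect:+.1f} points")
```

However the comparison group is constructed, whether by randomization or careful matching, the core logic is just this: the counterfactual stands in for the world without the intervention.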

Identifying mechanisms. Sophisticated tools like a randomized controlled trial can help you establish causal attribution, but they don’t necessarily uncover mechanisms. In other words, they can help you understand whether something is working, but they don’t always tell you why it is (or is not) working. Varying the intervention across treatment groups to uncover aspects that may be more or less successful is a good idea. Doing some good interview or survey work with participants in the study should be mandatory. Quantitative rigor is important, but really understanding how something works requires talking to people.
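
To make “varying the intervention” concrete, here is a small hypothetical sketch comparing two arms of an intervention against a control group. The arm names (a cash transfer with and without coaching) and the outcome scores are invented purely for illustration.

```python
# A hypothetical sketch of varying an intervention across treatment arms
# to probe mechanisms: for example, does adding coaching to a cash
# transfer change outcomes? All names and numbers below are invented.
import statistics

arms = {
    "control": [54, 51, 57, 53, 56, 52, 55, 50],
    "cash_only": [60, 58, 63, 61, 59, 62, 60, 57],
    "cash_plus_coaching": [68, 65, 70, 67, 69, 66, 71, 64],
}

baseline = statistics.mean(arms["control"])
for arm, outcomes in arms.items():
    if arm == "control":
        continue
    # Comparing each variant against control hints at which component
    # of the intervention is driving the observed effect.
    print(f"{arm}: estimated effect {statistics.mean(outcomes) - baseline:+.1f}")
```

A pattern like this can point to where a difference comes from; pairing it with interviews, as suggested above, helps explain why.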

Uncovering unintended consequences. Consciously looking for unintended consequences is both smart and something we owe the communities in which we work. Finding unintended consequences requires asking.

There are different ways of doing good impact evaluation, and varying timeframes as well—ranging from rapid-feedback prototyping to long-term studies.  Most critically, understanding impact requires a real desire to learn and grow.  Answering the four questions I’ve laid out can be a guide to doing so.


Comments

  1. This is a great summary of what it means to measure impact. I think there’s something implied here which is important to state clearly, however: that the kind of impact evaluation you describe is not cheap or easy. In fact, I would say that it’s well outside the budget and expertise of most individual nonprofits, and should be mostly the domain of the university and professional research world.

    Too many people use the word “impact” carelessly, without regard to the careful definition you lay out here (and which I agree with). They say that nonprofits should be measuring “impact” when they mean short-term indicators, or even participation, or when they don’t even know what they mean. This gives nonprofits a mandate to measure their own impact when very few have the resources to do so effectively.

    Thanks for laying out this definition — I think it’s an important one.

    1. Laura — Many thanks for your comment, you raise two important issues.

      There are many important steps that nonprofits can take to get a better sense of their impact which cost relatively little but have potentially huge benefits. These may be short of a full-scale impact evaluation, but still help nonprofits be responsive and adaptive. For example, eliciting feedback from program participants or service users. Short surveys are easy, fast and cheap to implement, either online or via mobile phones.

      The second issue concerns expertise. In my experience, more often than not the most significant hurdle to effective impact evaluation is the first part discussed in the blog post — clearly and explicitly articulating what the intended effects are, and how you might know them when you saw them. Rather than the domain of academics and researchers, this is something nonprofits can and should do more of.

      Thanks again for your comments!

  2. This is so excellent. Thank you for the very helpful links and resources!

  3. Excellent commentary on evaluation. I was particularly impressed with your breakdown in layman’s terms on the subject of impact evaluation. I plan to integrate it with my team.

    Spot on!
