In The Death of Evaluation, Andrew Means writes an obituary for “traditional, social science driven program evaluation.” His second post, The Role of Data, more finely articulates his argument. This post is my reaction to both, as well as my reflections on the appropriate role of evaluation and data in applied nonprofit settings.
First, some background on my perspective. I straddle the evaluation world and the nonprofit performance measurement and management world; I think both about technical and theoretical approaches to program evaluation and about applied management uses for data in day-to-day decision making within a nonprofit. I am both an evaluator and a nonprofit leader.
I fully agree that much of the professional evaluation sector is guilty of the faults Mr. Means lays at our feet, and we deserve to be challenged with the points that he articulates. However, I disagree with the unilateral dismissal of ‘traditional’ evaluation based on his understanding of how the world has changed and why evaluation no longer fits our modern world. Other commenters have already pointed out the insufficiency of his definition of evaluation and I will not belabor that point. While I agree with his premise that the world has changed and that program evaluation is insufficient to meet the multiplicity of needs for evidence that nonprofits now have, let’s not throw the baby out with the bath water. Just because most of us type on keyboards doesn’t mean the humble ‘traditional’ pen doesn’t have its uses.
In his follow-up post, Mr. Means pits evaluation and analytics against each other, making the argument that program evaluation “is often not forward looking, is over-focused on statistical proof, and actually undermines program improvement,” while, on the other hand, analytics facilitates improvement, is conducive to ongoing decision making, and is predictive rather than inferential. He insinuates that program evaluation needs to become more like business analytics. It’s not that I disagree with some of his points – in fact, for many of them I agree completely. But he is setting up a false dichotomy; it isn’t about whether analytics or evaluation is ‘better’ than the other, it’s about the appropriate uses of each. As Mario Morino writes in Leap of Reason,
The simple question that has served me best throughout my business and nonprofit careers is ‘To what end?’
The debate between evaluation and analytics is not an “either/or” proposition, it is a “yes, and…” one. There are situations where a summative evaluation (yes, even a randomized control trial) is the best approach, and other times when daily monitoring of analytical dashboards is best. It all depends on your goals and resources.
My last criticism is of his assertion that evaluation undermines efforts to improve. That is like saying textbooks inhibit our ability to learn. Evaluations can be used in any number of ways, just like textbooks can. Evaluations can lull us into a false sense of security or be used to limit critical thinking, much like textbooks can be used to facilitate rote memorization. Or evaluation, like a textbook, can be used to facilitate authentic learning and improvement. It is not about the tool itself, but the application of that tool in the real world.
So what do I think is the proper role of evaluation and data in the nonprofit sector? Well, I’m glad you asked!
If the title of this blog doesn’t give it away, I may have a philosophical tilt towards evidence based decision making in nonprofits. Yes, I am a trained evaluator, and yes, I design and build organizational dashboards, but that doesn’t mean I think that those are the one-size-fits-all solutions for every decision making opportunity. Here’s what I think needs to happen:
- Nonprofit professionals must care about validated data and evidence, whether created through an evaluation or not. You owe it to the communities and clients you serve to do no harm, and to provide the highest quality, most effective services you can with the resources you have; refusing to accurately understand whether you are doing so is willful ignorance. But having validated evidence does not replace the need for good judgement. Whether you approach building evidence through evaluation, or analytics, or quality assurance (or some other way) will depend on your capacity, resources, organizational culture, and priorities.
- Evaluators need to be more nuanced in their understanding of what a ‘good’ evaluation is, recognizing that our traditional model of evaluation does not work in all settings. Within the context of nonprofit organizations, this means understanding the balance between statistical validity and real-world application. Basically, read up on Utilization-Focused Evaluation. Most evaluators already know this, but it never hurts to remind ourselves that we exist not because the world needs more evaluation reports, but because the world needs more knowledge. Our goal should be the creation and use of that knowledge, not the perfect evaluation report. And let’s not forget that we evaluators are not the only generators of relevant knowledge for our stakeholders.
So yes, Mr. Means is right when he says the traditional approaches of evaluation aren’t enough. But that doesn’t mean that evaluation is dead, nor does it mean that evaluation is the unidimensional approach that he assumes it to be. The original conceptualization of evaluation has its roots in Lyndon Johnson’s Great Society, and sure, that original vision may be antiquated and inflexible. But if that form of evaluation is dead, then long live the forms of evaluation that are now emerging from the great work that nonprofit and public sector leaders and evaluators are engaging in all across the world.
This post was originally published on The Measured Nonprofit by Patrick Germain, as a direct response to Andrew Means’s two articles, The Death of Evaluation and its follow-up, The Role of Data. Focused on “improving lives through evidence based decision making,” Patrick has our thanks for his time responding to Andrew and for continuing the debate. Let us know what you think.