Accountability is not just a buzzword. It is a sharp focus on what we have actually done to change lives through our work. Nicholas Van Praag of Keystone Accountability shines a light on accountability in the social sector through the lens of humanitarian efforts, examining how feedback from the intended recipients of our services informs our programming and reporting.
Increasingly, the term we hear bandied about in the now $17-billion-a-year humanitarian industry is accountability. The problem is that, for the most part, the concept translates into turgid, not-terribly-useful evaluations—many of which take months or even years to complete—and PR anecdotes about so-called “results.”
What is missing is a systematic approach to assessing humanitarian operations through the eyes of the 62 million people who, according to UN figures for 2012, are in need of humanitarian assistance. After all, to run an aid program without understanding how beneficiaries perceive it is to ignore the simplest test of client satisfaction. It’s quite amazing, in fact, that donors have been willing to make funding decisions without any customer input for as long as they have.
A wide range of push factors continues to drive humanitarian assistance: war and conflict, food insecurity, natural disasters, and climate-induced population displacement. These stresses have given rise to a doubling of official humanitarian aid since 2000. And with the increase in cash has come a growing demand for accountability. The number of evaluation reports continues to multiply, but most are commissioned piecemeal by the aid agencies themselves or by donors interested in the bits they fund directly.
According to the 2012 State of the Humanitarian System report, “a significant number of evaluations (2009 to 2011) found that interventions were appropriate to the needs of recipient communities. As justification, they most commonly cited that community or local government priorities had been met.” These findings contrast with field-based surveys undertaken for the same report, in which two-thirds of respondents said they were dissatisfied or only partly satisfied with the amount and quality of the aid they received.
There is no generalizable evidence yet showing that beneficiary feedback in humanitarian programs translates into improved outcomes. This is something we will test when enough organizations start to ask aid recipients the right questions—about the quality, timeliness and relevance of aid, and whether they trust those providing it—and we can correlate the feedback with objectively verified outcomes.
Meanwhile, the communications efforts of aid agencies increasingly are focused on trumpeting the impacts of their programs. Without the perspective of the beneficiaries themselves, though, how can these organizations make necessary mid-course corrections or assess whether their claims about results are borne out by the reality on the ground?
The growing attention to beneficiary accountability remains biased toward the supply side of the aid equation. A lot of effort has gone into developing standards for the ‘what’ of accountability, certifying agencies that commit to those standards, and building their capacity to adopt best practices. This is crucial work, but there is little or no compulsion to follow the path toward greater accountability to beneficiaries.
How, then, to build the demand side of the equation so that listening to beneficiaries is no longer something optional? While information on citizens’ perceptions and customer preferences is routinely gathered in high- and middle-income countries, only rarely is it assembled from refugee camps and displaced-people’s shelters.
In the past, this type of data was missing partly because we didn’t have the tools to gather it. Today we do. With mobile phones and other technologies that are readily available in most parts of the world, gathering information from beneficiaries about their experience of relief efforts can be done relatively cheaply. Short surveys based on the customer-satisfaction model can be repeated often with these devices, thus providing a continuous stream of comparable data in real-time about, for example, whether beneficiaries believe that their welfare and safety are paramount concerns for those assisting them.
Eliciting feedback is one thing. Responding to it adequately is another. We’ve seen that if aid agencies ask beneficiaries questions and do not address the concerns they raise, survey participation rates drop off rapidly. On the other hand, if they pick up on the data and take corrective action, recipients are increasingly keen to provide feedback because they understand it makes a difference. It is not about collecting data, in other words. It’s about acting on it. This means creating incentives for organizations to listen, to act on what they hear, and then to close the loop by checking with those who provided the feedback that the actions taken address the concerns they’ve expressed.
We have found that many organizations want to be more responsive to the people they exist to help, and that they readily embrace new ways of obtaining data directly from them. Others are mired in the old ways of what is now a mature industry and will need encouragement to do the right thing. One way to push them along is to publish beneficiary feedback data in an index that rates the performance of organizations based on the perceptions of the clients themselves. This would ensure that people receiving humanitarian assistance and protection are given a voice on the things that matter most to them. It would also provide donors with the opportunity to reward those organizations that can show they are truly doing what their beneficiaries need them to do and, crucially, improving outcomes as a result.