Learning together, for advocacy

One of Beth Kanter’s posts on measurement within nonprofit organizations addressed the “data disconnect” between organizations and their funders.

She cites research finding that more than half of nonprofit leaders say funders prioritize their own data needs over nonprofits' need for information about their work. That's a concerning indicator, given what we know about how important data are to good decision-making by nonprofits in pursuit of collective impact.

There are two key points from the post that I have been mulling over, especially as the Advocacy Evaluation Collaborative of which I am a part enters its second year.

First, it’s clear that nonprofit organizations want to use data more fully and more systematically to guide their work. Nonprofit leaders not only say so; they are also dedicating real resources toward that end, which is probably even clearer proof that they’re serious. There are real constraints here, though, particularly the lack of financial resources within most grants dedicated specifically to evaluation. We see this in the policy world, too: there’s an assumption that evaluation somehow just ‘gets done’, when, in truth, doing it well often carries significant costs.

There is also some confusion about what, precisely, should be measured, but, to me, that is less a problem in the evaluation arena than in the pursuit of impact itself. Once we’re clear about the changes we want, expect, or hope to see from a particular change strategy, what we’re going to measure is obvious: did those changes, in fact, happen? So, to the extent that there is a lack of clarity, or even disagreement, between organizations and funders about what should be measured, I think that reflects a larger chasm around what is supposed to be pursued.

Second, there is a risk that, as data are emphasized as part of the change process, we’ll see data collection for its own sake, with short shrift given to analysis and use. And that’s a real problem, since collecting data matters far less than the ‘sense-making’ process: figuring out what the data are saying, what it all means, and what to do about it. Especially when inadequate resources are dedicated to evaluation, though, something will get squeezed, and, if evaluation is conducted primarily to satisfy funders that it is, in fact, happening, then producing data may be valued over really learning from the questions asked.

As I think back on this first year of working pretty closely with both advocacy organizations and health funders in Kansas around advocacy evaluation, I’m relatively encouraged. There have been times when the process has seemed laborious, and I have felt particular kinship with the organizations, which have often struggled to carve out time for the evaluation learning in the midst of an especially tough policy climate in our state.

But I think we’re mostly getting these two critical pieces right, which is why I’m hopeful. There has been a lot of conversation between funders and organizations about how to decide what to measure, and about the importance of agreeing, first, on a theory of change worthy of evaluation, and then letting the questions flow from that understanding of the impact that is supposed to occur. And data collection is actually a fairly minor chunk of the overall workflow; much more time is spent articulating those theories of change, analyzing data together, and finding ways to fold evaluation fairly seamlessly into organizations’ flow of operations, to increase the likelihood that they do something with what they learn.

It’s that emphasis, I guess, that has made the difference: on learning collaboratively, and on evaluation as a tool for moving the work forward rather than a hoop to jump through.

I don’t know how you take it to scale, really, this side-by-side process of those with money and those doing the work sitting down together to talk about what they need to learn and how to learn it. This process has only involved a few health funders in one state and six advocacy organizations, and it has still been pretty expensive and time-consuming. But maybe, through peer sharing, nonprofit organizations will come to demand this kind of transparency and collegiality. And foundations will come to expect that they can be part of the learning too. And, together, we can harness the power of inquiry to get better at what we are, together, trying to do.

Change the world.
