If you haven’t commented yet this week, this post is your last chance! (Except, of course, that you can go back to post on one of the other two!). Tomorrow, I’ll announce the winner of the free copy of The Networked Nonprofit!
There is a lot of content in the book about how organizations can, and should, approach social media as a sort of experiment, building in mechanisms that will help them to learn quickly, and well, from what they’re trying, so that they can modify it as needed. They stress a real intentionality in this approach, an emphasis, from the very beginning, on defining what it is that we hope to accomplish, and the measures that we’ll use to help us get there. They also create space, though, for different organizations (or even different campaigns within the same organization) to define “success” differently, and they caution against reducing social media to a mere numbers game.
As I wrap up a contract evaluating an advocacy initiative for a foundation here in Kansas, and continue my reading, speaking, and contemplating about how to evaluate advocacy, and why such evaluation is so important, there is a lot from the evaluation discussion in The Networked Nonprofit that I believe applies to this endeavor of advocacy evaluation, too.
Foremost is the idea that evaluation should be actionable: it should give practitioners information they can actually use, and be eminently valuable to them as a real-time check on what they’re trying. Having such information not only improves practitioners’ ability to change what’s not working, but also increases organizations’ willingness to take risks (like trying advocacy or social media), because there’s comfort in knowing that we’ll be able to tell what’s working and what’s not.
They call this “learning loops,” and the way they talk about it will sound very appealing, I believe, to anyone who has participated in the “other” kind of evaluation: the kind designed by a third party to meet a donor’s needs for information, rather than the constituents’ or the practitioners’; the kind that produces a bound report years after anyone stopped caring about (or even remembering) what was being evaluated; and the kind that uses criteria that don’t remotely resemble “success” from the perspectives of those actually doing the work.
The details on learning loops, below, come from Kanter’s work, but this is my conceptualization of how the idea applies to advocacy evaluation, and how it differs from “traditional” evaluation.
There is still a lot that’s hard about evaluating advocacy, and there are still a lot of variables that impinge on our ability to measure precisely the impact of our interventions.
Still, this kind of advocacy evaluation, woven seamlessly into the practice of advocacy itself, holds tremendous promise for overcoming our collective resistance to the idea and, therefore, for beginning to build a body of knowledge that will help us get better at doing advocacy evaluation.
And it starts with changing how we think about evaluation, not as a hoop through which some funder says we must jump, but instead as a part of the process of social change, and one that gives us another tool through which to improve our work.
If you’ve been a participant in either approach to evaluation, especially evaluating advocacy or social media efforts, what were those experiences like? How might you implement learning loops in your organization, specifically in your advocacy? How does this change how you think about evaluation?