Evaluating Advocacy: Of jumping hoops and learning loops

Photo credit: 2007 Powwow, Smithsonian Institution via Flickr Creative Commons

If you haven’t commented yet this week, this post is your last chance! (Except, of course, that you can always go back and comment on one of the other two!) Tomorrow, I’ll announce the winner of the free copy of The Networked Nonprofit!

There is a lot of content in the book about how organizations can, and should, approach social media as a sort of experiment, building in mechanisms that help them learn quickly, and well, from what they’re trying, so that they can modify it as needed. The authors stress real intentionality in this approach: an emphasis, from the very beginning, on defining what it is we hope to accomplish and the measures that will help us get there. They also create space, though, for different organizations (or even different campaigns within the same organization) to define “success” differently, and they caution against reducing social media to a mere numbers game.

As I wrap up a contract evaluating an advocacy initiative for a foundation here in Kansas, and continue my reading, speaking, and contemplating about how to evaluate advocacy, and why such evaluation is so important, there is a lot from the evaluation discussion in The Networked Nonprofit that I believe applies to this endeavor of advocacy evaluation, too.

Foremost is the idea that evaluation should be actionable: it should give practitioners information they can really use, and be eminently valuable to them as a real-time check on what they’re trying. Having such information not only improves practitioners’ ability to change what’s not working, but also increases organizations’ willingness to take risks (like trying advocacy or social media), because there’s comfort in knowing that we’ll be able to tell what’s working and what’s not.

They call this “learning loops,” and the way they talk about it will sound very appealing, I believe, to anyone who has participated in the “other” kind of evaluation: the kind designed by a third party to meet a donor’s, not the constituents’ or the practitioners’, need for information; the kind that produces a bound report years after anyone stopped caring about (or even remembering) what was being evaluated; and the kind that uses criteria that don’t remotely resemble “success” from the perspective of those actually doing the work.

The details on learning loops, below, come from Kanter’s work, but this is my conceptualization of how the idea applies to advocacy evaluation, and how it differs from “traditional” evaluation.

  • Learning loops emphasize planning for evaluation from the beginning, involving stakeholders in defining success and choosing measures, rather than tacking an evaluation study on at the end.
  • Learning loops provide real-time information, so that it can be applied to change course mid-stream. Organizations take a few hours every month to ask themselves questions about what’s working and what’s not, and they adjust workplans and even strategic goals to account for what they’re learning.
  • Practitioners collect the data that feed the learning loops, and they help to interpret them. They measure engagement (who’s connecting with our work, and what are they saying about that connection?), return on investment (the traction that they’re getting from specific tactics, and which ones deserve more attention), and social change (what is actually getting better about the problems that concern us).
  • Participants engage in a process of reflection as a part of the learning loop; the priority is on really learning something from the evaluation endeavor, and there’s a recognition that we learn best when we have a chance to process with others.
  • Learning loops use low-cost, relatively low-risk experiments to test assumptions and begin the process of organizational change, as a prelude to lasting social change, rather than waiting until the end of an expensive and lengthy activity to see if it worked.

There is still a lot that’s hard about evaluating advocacy, and there are still a lot of variables that impinge on our ability to measure precisely the impact of our interventions.

Still, this kind of advocacy evaluation, woven seamlessly into the practice of advocacy itself, holds tremendous promise for overcoming our collective resistance to the idea and, therefore, beginning to build a body of knowledge that will help us get better at doing advocacy evaluation.

And it starts with changing how we think about evaluation: not as a hoop through which some funder says we must jump, but as a part of the process of social change, and one that gives us another tool for improving our work.

If you’ve been a participant in either approach to evaluation, especially in evaluating advocacy or social media efforts, what were those experiences like? How might you implement learning loops in your organization, specifically in your advocacy? How does this change how you think about evaluation?
