I mentioned the Kansas Advocacy Evaluation Collaborative the other day; one of the super-cool parts about it is that I get to work with the Center for Evaluation Innovation, including the really smart and incredibly fun Tanya Beer.
On top of that, I’m really encouraged by the way that advocacy evaluation, and the case for building evaluation capacity, is introduced to these health advocates, all of whom are very busy and would be justified in not jumping on the ‘please do this, too’ train.
It’s about figuring out how to advocate better.
I mean, yes, there are a lot of foundations in the room. So, yes, there’s the expected angst about their reporting requirements: how to explain a given advocacy effort in a way that can gain foundation support, how much to share about strategy…all of those very real constraints that I cannot forget, just because I am lucky enough not to have to worry about them anymore.
But, last month, when I sat down with the advocacy organizations in Kansas with whom I’ll be working most closely to provide technical support, and talked through what they would like to assess, I was really, really excited by their responses.
One organization already has a quite sophisticated system for looking at the capacity and engagement of their grassroots allies; they rank allies by their commitment to the issues and track their progress along a continuum of leadership. This way, they can figure out not only how to deploy people effectively, but also where to target their investments in specific people, specific issues, and specific capacities.
The organization expressed, however, that they lack this same capability when it comes to their partner organizations, even though they know that their coalition partners, too, vary in their capacity and their authentic connection to the issues. They recognize that the strength of their network relationships affects how they engage on a given issue, and the likelihood of their success.
On some of their issues, they are literally surrounded by fairly well-functioning coalitions, such that, taking a field capacity view, this organization does not need to be well-positioned on ALL of the essential elements of capacity, since there are others who can fill the inevitable gaps.
On other issues, though, they stand almost alone. Or, at least, there are few other voices raising the same nuanced angles on the issues that they do, which means that they can’t count on others to carry that work forward in quite the same way.
This calculus makes them particularly cognizant of the need for field capacity, so they want to explore a partner assessment or network mapping, to get a better sense of who has what, and can do what, on which issues. They may even want to solicit financial support for some of their partners, as part of a collaborative, in order to indirectly strengthen their own capacity. And they need help deciding where to focus among partnerships that are particularly promising but still weak, the same way they do with individual allies.
The other organization with which I’ll primarily work is still fleshing out what they want to focus on, but the common theme is this:
If advocacy evaluation is going to work, really work, and not just be a series of hoops we jump through to please funders, then we have to do it for us.
We have to see evaluation as an opportunity to ask the questions to which we want answers. We have to construct evaluation methodologies that fit with our practices and our skills. We have to work on timelines that align with our advocacy campaigns. We have to produce results that we can digest, and act upon, and build from.
We have to want it, really, and see that we need it.
Because we do.