In particular, we’ve been talking about how to help organizations put it all together–tie the components of advocacy capacity together and leverage them for greater success–while continually assessing where and how to develop additional capacity, so that the organization can reach its full potential. I was thinking about those conversations, and our quest for measurement indicators that would help organizations gauge the extent to which they’re achieving that ideal, when I read Beth Kanter’s post about key performance indicators. She makes the analogy that measurement is like hooking up a big TV: you have to be able to look at each component individually, and then also figure out how they fit together. I have, perhaps rather obviously, never hooked up a television of any size, but I think I can still get the visual.
And, certainly, I can see how having clear indicators–data points that serve as critical benchmarks by which to assess our work–can make clear a process of evaluation that would otherwise be quite overwhelming and murky.
But, I guess like hooking up the TV, I think it’s that ‘pulling it all together’ part that is the trickiest. Not that identifying those Key Performance Indicators for the elements of advocacy capacity is a straightforward exercise in itself, but, as AFJ and I are discussing regarding advocacy capacity evaluation, there is something nearly unquantifiable, sometimes, about taking those individual parts and putting them to work. That, to me, is the parallel between TV assembly and evaluation, perhaps especially in advocacy.
Here are some of the very tangible ways in which this becomes manifest in advocacy evaluation. What are your key performance indicators, for your advocacy? And how would you quantify that elusive ‘putting it all together’ element?
- Ability to know which capacities to use when, and how to coordinate them–for example, when it’s good to bring the field in and when it’s better to keep the issue quiet. This is the ability to translate capacity–which is really just a latent strength, until activated–into effective advocacy. It is also the will, within the organization, to commit that capacity to effective advocacy, rather than a reluctance to ‘expend’ resources on social change.
- Judgment, about how to make decisions and how to wield the organization’s capacity within a dynamic environment. This relates to adaptive capacity, and the ability to respond to changing conditions, but it also determines how much impact organizations can get from their advocacy capacity. It means deciding when to engage in which advocacy arenas, and which goals are appropriate, but it’s one of those things that is best observed in its absence, which can make measurement hard. One of the judgments that nonprofit advocates have to make is how to draw lessons from previous advocacy encounters that can inform future negotiations and future strategy planning, and then how to incorporate that learning into the next round of decisions. Advocacy evaluation, including the isolation of key performance indicators, clearly augments this capacity, in a virtuous cycle, but only if there’s intentional reflection around evaluation after action.
- Ability to analyze how different elements of capacity build upon each other. This means, in the context of advocacy evaluation, knowing that strengths in one area of capacity can improve functioning in another. You can leverage your coalition partners for indirect relationships to policymakers, and you can approach policy analysis in such a way as to strengthen your ability to take on judicial advocacy, for example. We sometimes see these as disparate elements, even dividing them up within advocating nonprofit organizations, which can limit organizations’ ability to get the greatest ‘synergy’ from their efforts.
- Ability to leverage the overall capacity of the field, for greatest momentum. Advocates need to be able to figure out where the field’s gaps in advocacy capacity are, where there is an excess, perhaps, of a particular element of capacity, and how organizations need to complement each other’s capacity, for maximum impact. So, if one organization isn’t very well-developed in media advocacy, for example, but there are others in the field with those relationships and skills in abundance, it may not be a failing for that organization to lack extensive capacity there, provided there is good coordination with partners in the field, such that those particular areas are well ‘covered’ by someone.
We’re not at the point, yet, of having crystallized key performance indicators with which to measure these elements, but that’s where we need to be, ultimately. Without the understanding of how to tie elements of capacity together, we run the risk of having a television with lots of connected wires and, yet, still, not the clearest picture.
Connections matter, and no one understands that better than an advocating social worker.