KPIs and Advocacy Evaluation

I have been corresponding some with the advocacy evaluation folks at the Alliance for Justice, as they make some changes to their advocacy capacity evaluation tool.

In particular, we’ve been talking about how to help organizations put it all together–tie all of the components of advocacy capacity together, to leverage them for greater success–while also continually assessing where and how to develop additional capacity, in order to reach the organization’s full potential. I was thinking about those conversations, and our quest for measurement indicators that would help organizations gauge the extent to which they’re achieving that ideal, when I read Beth Kanter’s post about key performance indicators. She makes the analogy that measurement is like hooking up a big TV–you have to be able to look at each component individually, and then also figure out how they fit together. I have, perhaps rather obviously, never hooked up a television of any size, but I think I can still get the visual there.

And, certainly, I can see how having clear indicators–data points that serve as critical benchmarks by which to assess our work–can clarify a process of evaluation that would otherwise be quite overwhelming and murky.

But, much like hooking up the TV, I think it's that 'pulling it all together' part that is the trickiest. Not that identifying Key Performance Indicators for the individual elements of advocacy capacity is a straightforward exercise itself, but, as AFJ and I have been discussing regarding advocacy capacity evaluation, there is something nearly unquantifiable, sometimes, about taking those individual parts and putting them to work together. That, to me, is the parallel between TV assembly and evaluation, perhaps especially in advocacy.

Here are some of the very tangible ways in which this becomes manifest in advocacy evaluation. What are your key performance indicators, for your advocacy? And how would you quantify that elusive ‘putting it all together’ element?

  • Ability to know which capacities to use when and how to coordinate them, for example, when it’s good to bring the field in and when it’s better to keep the issue quiet. This is the ability to translate capacity–which is really just a latent strength, until activated–into effective advocacy. It is also the will, within the organization, to commit capacity to effective advocacy, instead of reluctance to ‘expend’ resources for social change.
  • Judgment, about how to make decisions and how to wield the organization's capacity within a dynamic environment. This relates to adaptive capacity, and the ability to respond to changing conditions, but it also determines how organizations can get the most impact from their advocacy capacity. It means deciding when to engage in which advocacy arenas, and what goals are appropriate, but it's one of those things that is best observed in its absence, which can make measurement hard. One of the judgments that nonprofit advocates have to make is how to learn lessons from previous advocacy encounters that can inform future negotiations and future strategy planning, and then how to incorporate that learning into the next round of decisions. Advocacy evaluation, including the isolation of key performance indicators, clearly augments this capacity, in a virtuous cycle, but only if there's intentional reflection around evaluation after action.
  • Ability to analyze how different elements of capacity build upon each other. This means, in the context of advocacy evaluation, knowing that strengths in one area of capacity can improve functioning in another. You can leverage your coalition partners for indirect relationships to policymakers, and you can approach policy analysis in such a way as to strengthen your ability to take on judicial advocacy, for example. We sometimes treat these as disparate elements, even dividing them up within advocating nonprofit organizations, which can limit organizations' ability to get the greatest 'synergy' from their efforts.
  • Ability to leverage the overall capacity of the field, for greatest momentum. Advocates need to be able to figure out where the field's gaps in advocacy capacity are, where there is an excess, perhaps, of particular elements of capacity, and how organizations need to complement each other's capacity, for maximum impact. So, if one organization isn't very well-developed in media advocacy, for example, but there are others in the field with those relationships and skills in abundance, it may not be a failing of that organization to lack extensive capacity there, as long as there is good coordination with partners in the field, such that those particular areas are well 'covered' by someone.

We’re not at the point, yet, of having crystallized key performance indicators with which to measure these elements, but that’s where we need to be, ultimately. Without the understanding of how to tie elements of capacity together, we run the risk of having a television with lots of connected wires and, yet, still, not the clearest picture.

Connections matter, which no one understands like an advocating social worker.


5 responses to “KPIs and Advocacy Evaluation”

  1. Sometimes when I am reading your editorials, I feel like I am reading a technical manual and I get lost in all of the communication. But if I am reading this correctly, you are trying to teach us that we all work together even though we may not recognize it. Like a parent advocates for their child’s education with the school, and the school is advocating for better academic lessons, and the Board of Education is advocating for the best possible outcomes academically. We are all trying to achieve the same results but in different capacities. I hope I am relating this right.
    I work in the Mental Health arena, and when we are all working towards the same goal, we get better results. In the political arena, I think those who are pushing the legislation are trying to make things better for those that this legislation will serve. Those who are advocating for the services needed want the same thing but go about it in different ways. Likewise in nonprofits, those working for these agencies want more funding to be able to provide adequate services for their clients but never really spend time learning how to go about making this happen. Those advocating understand what is needed but don't always communicate with those in the “field,” even though they can't exist without each other. If these two entities could collaborate, then there might be better opportunities to complement each other's work and open up more possibilities to work together for common causes.
    So the advocacy capacity evaluation is almost like a mediator, to find what is working and what is lacking, and then how to help them become complementary in order to maximize the results for positive changes. Is this correct?

  2. Knowing how and when to advocate is essential to effective advocacy. One key component is working on relationships with other agencies and colleagues to collaborate on the type of advocacy that is needed, and testing the waters to see if it is the right time. I particularly like your comment “This is the ability to translate capacity–which is really just a latent strength, until activated–into effective advocacy.” In this type of advocacy you are already aware of the issue and what needs to be done; however, you are waiting for the right time to put it into action, “until activated.” Often I find myself in the “latent strength” mode, but making time for the action part is my personal barrier to advocacy in action. On the other hand, we must discern if we have the time to commit to advocacy; we have different seasons in our lives. For example, I am pursuing my master's and working full time. When I am able to advocate, I will use this evaluation as a useful tool for self-evaluation and organizational advocacy performance.

  3. ‘Putting it all together’ to effectively practice advocacy seems to be a balance of latent strengths, efficacy, and the will/gumption of the agency to act. Time has shown me that it is a practiced art to know how and when to use one's (or an agency's) adaptive capacity to adjust to new or evolving needs within the community. The agencies that have survived for a significant length of time are able to pick their battles in the advocacy arena to achieve lasting, positive change in the community, but also to rein in the urge to spread themselves too thin across multiple initiatives.
    Without necessarily endorsing the agencies, what are some local Kansas City agencies that have stood the test of time and balance in your opinion? The Family Conservancy of KCK has been serving the area under a variety of names since 1880 which seems to indicate a fair amount of balance (or a ton of funding) is present. I have heard multiple times of Kaw Valley Center (KVC), and they have been around since 1970. Could you suggest a couple local or national examples of organizations that perform well with respect to common key performance indicators?
    I appreciated the hooking up the TV visual!

  4. Good question, Jacob. Honestly, I think that sometimes longevity is not a good indicator; some organizations have survived by avoiding controversy and/or cultivating a sense of dependency on their services…not signals of effectiveness. Instead, I think you want to look at an organization's vision and how their services and alliances are taking them toward that. How are their community and issues different, because they exist? What are they accomplishing that leaves an imprint on their sphere of influence, as they define it? That clearly suggests a need for baseline good management practices, yes, but, just as clearly, those indicators of organizational survival are inadequate gauges of the organization's real efficacy. I think that Harvesters is a strong example of nonprofit governance and effectiveness in the region. In the advocacy arena, Kansas Action for Children is similarly demonstrating strong results. reStart is an organization evidencing a lot of adaptive capacity, even radically shifting its delivery model to accommodate new understandings of best practices in its field. Does that answer your question?

  5. I think I understand, and forgive me if I don't, but what I took from this blog is that indicators come from the different advocacy components used, which can strengthen nonprofits… these are not indicators of how effective the organizational structure is, but rather a reflection and evaluation of what progression in advocacy could look like?
