
Practice Reflections: Advocacy Evaluation

It occurred to me that I’ve been writing a lot over the past few months about what I’ve been reading and about my work at the university: teaching and supporting policy activities at the Assets and Education Initiative.

But my advocacy consulting work continues, albeit at a somewhat reduced level, and so this week is a sort of ‘from the field’ update, checking in on some of the tremendous work happening in the organizations with which I have the honor and pleasure of working.

Today, some reflections on supporting organizations’ advocacy evaluation, which has been a growing part of my consulting ‘portfolio’, so to speak, over the past two years. There are national organizations and practitioners far more expert than I in the field of advocacy evaluation, publishing regularly and dedicating most of their professional energies to advancing this work.

But I hope that my insights as a practitioner may add some value, especially for others in the field: I support organizations’ efforts to incorporate advocacy evaluation at a scale that fits not just their advocacy capacity, but the slice of their overall organization that advocacy occupies.

I have written before about my work in advocacy evaluation, but not in quite a while, so these are sort of my thoughts over the past several months, hopefully adding value to that earlier conversation.

What Works in Supporting Organizations’ Advocacy Evaluation Capacity:

  • Starting with a dialogue about what they really want to know: I know that this sounds obvious, but there’s a trick to it. As I discuss below, we cannot begin an evaluation exercise by just asking what organizations want to learn about their work, because there can’t be good evaluation without a framework for what we are evaluating. That’s part of the value we have to add as evaluators. But, conversely, starting with the logic model and emphasizing that structured process, without attending to organizations’ sometimes urgent need for more information about their work, is a recipe for disengagement. Getting this right is an art, not a science, but I think it requires acknowledging this tension (see below for more), opening dialogue about the end game, and then continually holding each other accountable for getting back to those objectives.
  • Acknowledging their evaluation ‘baggage’: There is unnecessary tension when nonprofits think that evaluators are cramming evaluation down their throats and evaluators think that nonprofits are being cavalier about the enterprise. The truth is that we cannot improve–as individual organizations or as a field–without good evaluation and analysis. But it’s also true that evaluation can be a fraught experience for a lot of organizations, and no one wins when that history and context are ignored, either. That doesn’t mean making a lot of jokes about how horrible logic models are, but it does mean putting on the table everyone’s own background in evaluation–or relative lack thereof–as a dynamic affecting the work.
  • Demystifying ‘analysis’: This, again, sort of falls into the ‘obvious’ category, but sometimes we as consultants/experts/technical assistance providers seek to demonstrate our legitimacy (I think/hope it’s that, instead of our superiority) by enhancing, rather than deflating, the mystique around our work. But no one wins when research or evaluation or analysis (or, fill in the blank: fundraising, organizing, budgeting) is considered difficult or, worse still, mysterious. The biggest breakthroughs I have had in advocacy evaluation with organizations have come when they realize how much this is just about putting form and structure to what comes instinctively to them: asking questions about their work and setting out to find the answers.
  • Bridging to funders: There is no more immediately applicable use of the advocacy evaluation enterprise than communication with funders about organizations’ strategies, adaptations, outcomes, and progress. We absolutely cannot engage in advocacy for funders’ sake, but we also cannot expect to be financially sustainable over the long term if we fail to consider funders’ need for information to drive their own decisions. As a consultant, I can broker this relationship, to a certain extent, simultaneously serving both ‘clients’.
  • Investing in process: The how matters, here and always. Sometimes this means bringing people into conversations who wouldn’t necessarily need to be there, because the organization wants to invest in their capacity. Or it means detouring to develop some indicators and measures relevant to a particular funder, because that will enable organizational staff to convince the Board of Directors that this evaluation work is valuable. Or it means going through the process of testing each of the assumptions embedded in an organization’s strategy, because only teasing those apart yourself can really lay them bare. This stuff can’t be rushed, so we have to allow the process to unfold.
  • Starting with sustainability in mind: Every nonprofit organization doing great work right now is, at least, plenty busy. Some are pushed to the breaking point. So it doesn’t matter how well you make the case for the value of advocacy evaluation, or how excited you get the staff about leveraging their knowledge for greater impact, or even how much funders appreciate the information. Unless there is a realistic way for an organization to take on the work of advocacy evaluation, it just won’t get done. To me, this means being willing to scale back an evaluation plan, to help an organization think about what they can glean of value within the resource footprint that they have available. That sometimes means cutting corners on tools or abandoning certain fields of inquiry, but that doesn’t mean failure; it can mean that there’s a real future for evaluation in the organization.

What Doesn’t:

  • Expecting organizations to care as much about evaluation as evaluators, at least at first: This is what we do for a living (well, not me, so much, but you know). It cannot be the advocacy organization’s reason for being, or they wouldn’t be doing advocacy that we could then evaluate. We can’t get our feelings hurt or, worse, assume that organizations aren’t ‘serious’ about building their evaluation capacity, just because it may not be #1 on their to-do list.
  • Prioritizing ‘rooting’ evaluation in the organization, at the expense of added value: So, yes, as I said above, we absolutely need to think about sustainable ways for organizations to assume responsibility for advocacy evaluation within their existing structures. But that shouldn’t mean relegating ourselves to a mere ‘advice-giving’ function, with the expectation that organizations take on all of the work surrounding advocacy evaluation, at their own expense. Sure, it would be great for them to have the experience of constructing their own logic models or designing their own tools. I guess. But, to a certain extent, that’s what I’m here for, and, while I get the whole ‘teach someone to fish’ concept, we get to the greatest field capacity by getting over the idea that everyone has to be and do everything and, instead, figuring out how to make the expertise of the collective available to all.
  • Letting their questions totally drive the evaluation: This, too, sounds contradictory to the idea of starting with their questions, but it gets back to that question of balance: if organizations already knew exactly what they needed to be asking, and how, they probably wouldn’t need my consultation. If I’m going to add value, it should be at least in part through informing their consideration of their options, and influencing their identification of the issues that most warrant investigation. This, of course, can’t mean that I drive the agenda, but neither are we in the ‘agency-pleasing’ business. My ultimate responsibility is to the cause, and to those affected by it, and sometimes that has to mean some pushing back.
  • Assuming evaluation is a technical challenge only: Organizations sometimes have real reasons to not want to embark on a particular evaluation project: maybe they are afraid of what the results will show, or maybe they worry about who will want access to their findings, or maybe they fear the effect on staff morale if strategies are exposed as less than effective. None of these are reasons not to evaluate, of course, but we can’t start from the assumption that it’s only lack of knowledge or skill that is holding organizations back from evaluation, when it may very well be will.

If you are engaged in advocacy evaluation or have worked with or as an evaluation consultant, what’s your ‘view from the field’? What do you wish consultants knew about engaging organizations in advocacy evaluation capacity?


Evaluation Capacity that Sticks

In honor of Labor Day, and with some grieving for the end of my summer, I’m fully embracing the contributions of others this week.

It takes a village to come up with these blog posts, I guess?

One of my projects this year is an advocacy evaluation capacity-building initiative, in partnership with TCC Group.

I have been really excited to get to work alongside their consultants; having spent a fair amount of time in TCC webinars, I find that co-presenting on advocacy evaluation with them is a real gift.

Recently, TCC distributed an article about some of their learning, from this project and others, about how to build evaluation capacity that truly transforms organizational practices, adding net capacity that outlasts the period of intense consultant engagement.

It’s something we’ve been talking about a lot in the Kansas context, too: how do we ensure that we’re not just swooping in to do some evaluation with and for these organizations but, instead, helping them to build knowledge and integrate structures that will enable them to take on advocacy evaluation in a sustained and effective way?

A few points from the article and from my engagement with this project that resonate, I think, more broadly in the consulting and capacity-building fields:

  • Organizations have a lot to learn from each other: The organizations in the cohort with which I’m working clamor for more time with each other. Consultants don’t have a lock on knowledge, and not all capacity-building happens within the confines of the consultant-grantee relationship.
  • Learning needs immediate application: One of the challenges with our Kansas project is that it started in the fall, which meant that, by the time organizations had outlined their evaluation questions and begun to select instruments, the legislative session had arrived and they had no time to implement their ideas. Learning not applied can atrophy quickly, and we’re considering how to restructure the calendar for future cycles with this in mind.
  • We need to acknowledge the resource/capacity link: Of course it’s easy to say that the way we build capacity is to add dollars. Of course. And there’s obviously not a 1:1 relationship between, in this example, evaluation capacity and organizational budgets. But it’s also true that we can learn everything there is to know and still be crippled, in significant ways, by scarce resources, which means that true, sustainable capacity building in any area of organizational functioning also has to take into account how organizations are resourced. Period.

I believe in the process of helping nonprofit leaders ask good questions about what they’re doing, the impact that it’s having, and what they need to change.

And I want to ensure that they are positioned to keep asking those questions after I move on.

To make a real difference, it has to stick.

Learning together, for advocacy

One of Beth Kanter’s posts on measurement within nonprofit organizations addressed the “data disconnect” between organizations and their funders.

She cites research finding that more than half of nonprofit leaders say funders prioritize their own data needs over nonprofits’ need for information about their work, an obviously concerning indicator, given what we know about the importance of data to inform good decision-making by nonprofits in pursuit of collective impact.

There are two key points from the post that I have been mulling over, especially as the Advocacy Evaluation Collaborative of which I am a part enters its second year.

First, it’s clear that nonprofit organizations want to use data more fully and more systematically to guide their work. Nonprofit leaders not only assert that; they are also dedicating some resources toward that end, which is probably even clearer proof that they’re serious. There are some real constraints here, though, particularly the lack of financial resources, within most grants, specifically dedicated to evaluation. We see this in the policy world, too; there’s an assumption that, somehow, evaluation just ‘gets done’, when, in truth, doing it well often carries significant costs. There is also some confusion about what, precisely, should be measured, but, to me, this isn’t as much a problem in the evaluation arena as in the context of pursuing impact itself. Because, really, once we’re clear about the changes we want/expect/hope to see come from a particular change strategy, it’s obvious what we’re going to measure: did those changes, in fact, happen? So, to the extent that there is a lack of clarity or even disagreement between organizations and funders about what should be measured, I think that reflects a larger chasm around what is supposed to be pursued.
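
Purely as an illustration of what I mean (the expected changes and indicators below are invented, not drawn from any real organization’s framework), the logic is roughly this: name the changes you expect from your strategy, and the measurement questions follow almost automatically.

```python
# A purely hypothetical theory of change, paired with the evidence we might
# look for to know whether each expected change actually happened.
# Neither the changes nor the indicators come from any real organization.
theory_of_change = {
    "Legislators understand the policy issue": [
        "number of briefings delivered to legislative staff",
        "references to the issue in committee testimony",
    ],
    "Coalition partners adopt shared priorities": [
        "partners signing onto the shared policy agenda",
        "joint actions taken during the legislative session",
    ],
}

# The evaluation questions flow directly from the expected changes:
# did each change, in fact, happen, and what evidence tells us so?
for expected_change, indicators in theory_of_change.items():
    print(f"Did this happen? {expected_change}")
    for indicator in indicators:
        print(f"  evidence to examine: {indicator}")
```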

Second, there is a risk that, as data are emphasized as part of the change process, there will be data collection for its own sake, with short shrift given to the analysis and utilization of data. And that’s a real problem, since, really, getting some data is not nearly as important as the ‘sense-making’ process–figuring out what the data are saying, and what to do about it, and what it all means. Especially when there are inadequate resources dedicated to evaluation, though, something will get squeezed and, if evaluation is conducted primarily in order to satisfy funders that it is, in fact, happening, then being able to produce data may be valued over really learning from the questions asked.

As I think back on this first year of working pretty closely with both advocacy organizations and health funders in Kansas around advocacy evaluation, I’m relatively encouraged. There have been times when the process has seemed laborious, and I have felt particular kinship with the organizations, who have often struggled to make time to dedicate to the evaluation learning, in the midst of an especially tough policy climate in our state.

But I think we’re mostly getting these two critical pieces right, which is where I’m hopeful. There has been a lot of talk between funders and organizations about how to decide what to measure, and about the importance of agreeing, first, on a theory of change that is worthy of evaluation, and then letting the questions flow from that understanding of the impact that is supposed to occur. And the data collection piece is actually a fairly minor chunk of the overall workflow, with much more time spent on articulating those theories of change, analyzing data together, and finding ways to incorporate evaluation rather seamlessly into organizations’ flow of operations, in order to increase the likelihood that they do something with what they learn.

It’s that emphasis, I guess, which has made the difference: on learning, collaboratively, and on evaluation as a tool with which to move the work forward, rather than a hoop to jump through.

I don’t know how you take it to scale, really, this side-by-side process of those with money and those doing the work sitting down together to talk about what they need to learn and how to learn it. This process has only involved a few health funders in one state and six advocacy organizations, and it has still been pretty expensive and time-consuming. But maybe, through peer sharing, nonprofit organizations will come to demand this kind of transparency and collegiality. And foundations will come to expect that they can be part of the learning too. And, together, we can harness the power of inquiry to get better at what we are, together, trying to do.

Change the world.

KPIs and Advocacy Evaluation

I have been corresponding some with the advocacy evaluation folks at the Alliance for Justice, as they make some changes to their advocacy capacity evaluation tool.

In particular, we’ve been talking about how to help organizations put it all together–tie all of the components of advocacy capacity together, to leverage them for greater success–while also continually assessing where and how to develop additional capacity, in order to reach the organization’s full potential. I was thinking about those conversations, and our quest for measurement indicators that would help organizations gauge the extent to which they’re achieving that ideal, when I read Beth Kanter’s post about key performance indicators. She makes the analogy that measurement is like hooking up a big TV–you have to be able to look at each component individually, and then also figure out how they fit together. I have, perhaps rather obviously, never hooked up a television of any size, but I think I can still get the visual there.

And, certainly, I can see how having clear indicators–data points that serve as critical benchmarks by which to assess our work–can make clear a process of evaluation that would otherwise be quite overwhelming and murky.

But, I guess like hooking up the TV, I think it’s that ‘pulling it all together’ part that is the trickiest. I mean, not that identifying those Key Performance Indicators for the elements of advocacy capacity is necessarily a straightforward exercise itself, but, as AFJ and I are discussing regarding advocacy capacity evaluation, there is something nearly unquantifiable, sometimes, about taking those individual parts and putting them to work. That, to me, is the parallel between TV assembly and evaluation, perhaps especially in advocacy.

Here are some of the tangible ways this challenge shows up in advocacy evaluation (and, after the list, a rough sketch of what I mean about trying to quantify it). What are your key performance indicators for your advocacy? And how would you quantify that elusive ‘putting it all together’ element?

  • Ability to know which capacities to use when and how to coordinate them, for example, when it’s good to bring the field in and when it’s better to keep the issue quiet. This is the ability to translate capacity–which is really just a latent strength, until activated–into effective advocacy. It is also the will, within the organization, to commit capacity to effective advocacy, instead of reluctance to ‘expend’ resources for social change.
  • Judgment, about how to make decisions and how to wield the organization’s capacity within a dynamic environment. This relates to adaptive capacity, and the ability to respond to changing conditions, but it also defines how organizations can get the most impact from their advocacy capacity. It means deciding when to engage in which advocacy arenas, and what are appropriate goals, but it’s one of those things that is best observed in its absence, which can make the measurement hard. One of the judgments that nonprofit advocates have to make is how to learn lessons from previous advocacy encounters that can help to inform future negotiations, and future strategy planning, and then how to incorporate that learning into the next round of decisions. Advocacy evaluation, including the isolation of key performance indicators, clearly augments this capacity, in a virtuous cycle, but only if there’s intentional reflection around evaluation after action.
  • Ability to analyze how different elements of capacity build upon each other. This means, in the context of advocacy evaluation, knowing that strengths in one area of capacity can improve functioning in another. You can leverage your coalition partners for indirect relationships to policymakers, and you can approach policy analysis in such a way as to strengthen your ability to take on judicial advocacy, for example. We sometimes see these as disparate elements, even dividing them up within advocating nonprofit organizations, which can limit organizations’ ability to get the greatest ‘synergy’ from their efforts.
  • Ability to leverage the overall capacity of the field, for greatest momentum. Advocates need to be able to figure out where the field’s gaps in advocacy capacity are, where there is perhaps an excess of a particular element of capacity, and how organizations need to complement each other’s capacity, for maximum impact. So, if one organization isn’t very well-developed in media advocacy, for example, but there are others in the field with those relationships and skills in abundance, it may not be a failing of that organization to lack extensive capacity there, as long as there is good coordination with partners in the field, such that those particular areas are well ‘covered’ by someone.
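
To make that ‘putting it all together’ challenge a bit more concrete, here is a rough sketch, with invented component names and scores (not taken from the AFJ tool or any real assessment), of why rolling up indicators for the individual elements of capacity still leaves the integrative piece to be assessed on its own terms:

```python
# Hypothetical component scores from a capacity assessment (1-5 scale).
# The component names and values are invented for illustration only.
component_scores = {
    "coalition_building": 4,
    "media_advocacy": 2,
    "policy_analysis": 5,
    "grassroots_mobilization": 3,
}

# A simple average describes the individual components reasonably well...
average_score = sum(component_scores.values()) / len(component_scores)

# ...but the integrative capacities sketched above (judgment, coordination,
# leveraging the field) resist the same kind of scoring; here they can only
# be captured as qualitative observations, revisited after each campaign.
integration_observations = [
    "knew when to activate the field and when to keep the issue quiet",
    "relied on coalition partners to cover the gap in media advocacy",
]

print(f"Average component score: {average_score:.2f}")
print("'Putting it all together' observations:")
for observation in integration_observations:
    print(f"  - {observation}")
```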

We’re not at the point, yet, of having crystallized key performance indicators with which to measure these elements, but that’s where we need to be, ultimately. Without the understanding of how to tie elements of capacity together, we run the risk of having a television with lots of connected wires and, yet, still, not the clearest picture.

Connections matter, which no one understands like an advocating social worker.