Practice Reflections: Advocacy Evaluation

It occurred to me that I’ve been writing a lot over the past few months about what I’ve been reading and about my work at the university: teaching and supporting policy activities at the Assets and Education Initiative.

But my advocacy consulting work continues, albeit at a somewhat reduced level, and so this week is a sort of ‘from the field’ update, checking in on some of the tremendous work happening in the organizations with which I have the honor and pleasure of working.

Today, some reflections on supporting organizations’ advocacy evaluation, which has been a growing part of my consulting ‘portfolio’, so to speak, over the past two years. There are national organizations and practitioners far more expert than I in the field of advocacy evaluation, publishing regularly and dedicating most of their professional energies to advancing this work.

But I hope that my insights as a practitioner, supporting organizations’ efforts to incorporate advocacy evaluation at a scale that fits not just their advocacy capacity but also the slice of their overall organization that advocacy occupies, may add some value, especially for others in the field.

I have written before about my work in advocacy evaluation, but not in quite a while, so these are sort of my thoughts over the past several months, hopefully adding value to that earlier conversation.

What Works in Supporting Organizations’ Advocacy Evaluation Capacity:

  • Starting with a dialogue about what they really want to know: I know that this sounds really obvious, but there’s sort of a trick to it. As I discuss below, we cannot begin an evaluation exercise by just asking what organizations want to learn about their work, because there can’t be good evaluation without a framework for what we are evaluating. That’s part of the value we have to add as evaluators. But, conversely, starting with the logic model and emphasizing that structured process, without attending to organizations’ sometimes urgent need for more information about their work, is a recipe for disengagement. Getting this right is an art, not a science, but I think it requires acknowledging this tension (see below for more), opening dialogue about the end game, and then continually holding each other accountable for getting back to those objectives.
  • Acknowledging their evaluation ‘baggage’: There is unnecessary tension when nonprofits think that evaluators are cramming evaluation down their throats and evaluators think that nonprofits are being cavalier about the enterprise. The truth is that we cannot improve–as individual organizations or as a field–without good evaluation and analysis. But it’s also true that evaluation can be a fraught experience for a lot of organizations, and no one wins when that history and context are ignored, either. That doesn’t mean making a lot of jokes about how horrible logic models are, but it does mean putting on the table everyone’s own background in evaluation–or relative lack thereof–as a dynamic affecting the work.
  • Demystifying ‘analysis’: This, again, sort of falls into the ‘obvious’ category, but sometimes we as consultants/experts/technical assistance providers seek to demonstrate our legitimacy (I think/hope it’s that, instead of our superiority) by enhancing, rather than deflating, the mystique around our work. But no one wins when research or evaluation or analysis (or, fill in the blank: fundraising, organizing, budgeting) is considered difficult or, worse still, mysterious. The biggest breakthroughs I have had in advocacy evaluation with organizations have come when they realize how much this is just about putting form and structure to what comes instinctively to them: asking questions about their work and setting out to find the answers.
  • Bridging to funders: There is no more immediately applicable use of the advocacy evaluation enterprise than communication with funders about organizations’ strategies, adaptations, outcomes, and progress. We absolutely cannot engage in advocacy for funders’ sake, but we also cannot expect to be financially sustainable over the long term if we fail to consider funders’ need for information to drive their own decisions. As a consultant, I can broker this relationship, to a certain extent, simultaneously serving both ‘clients’.
  • Investing in process: The how matters, here and always. Sometimes this means bringing people into conversations who wouldn’t necessarily need to be there, because the organization wants to invest in their capacity. Or it means detouring to develop some indicators and measures relevant to a particular funder, because that will enable organizational staff to convince the Board of Directors that this evaluation work is valuable. Or it means going through the process of testing each of the assumptions embedded in an organization’s strategy, because only teasing those apart yourself can really lay them bare. This stuff can’t be rushed, so we have to allow the process to unfold.
  • Starting with sustainability in mind: Every nonprofit organization doing great work right now is, at least, plenty busy. Some are pushed to the breaking point. So it doesn’t matter how well you make the case for the value of advocacy evaluation, or how excited you get the staff about leveraging their knowledge for greater impact, or even how much funders appreciate the information. Unless there is a realistic way for an organization to take on the work of advocacy evaluation, it just won’t get done. To me, this means being willing to scale back an evaluation plan, to help an organization think about what they can glean of value within the resource footprint that they have available. That sometimes means cutting corners on tools or abandoning certain fields of inquiry, but that doesn’t mean failure; it can mean that there’s a real future for evaluation in the organization.

What Doesn’t:

  • Expecting organizations to care as much about evaluation as evaluators, at least at first: This is what we do for a living (well, not me, so much, but you know). It cannot be the advocacy organization’s reason for being, or they wouldn’t be doing advocacy that we could then evaluate. We can’t get our feelings hurt or, worse, assume that organizations aren’t ‘serious’ about building their evaluation capacity, just because it may not be #1 on their to-do list.
  • Prioritizing ‘rooting’ evaluation in the organization, at the expense of added value: So, yes, as I said above, we absolutely need to think about sustainable ways for organizations to assume responsibility for advocacy evaluation within their existing structures. But that shouldn’t mean relegating ourselves to a mere ‘advice-giving’ function, with the expectation that organizations take on all of the work surrounding advocacy evaluation, at their own expense. Sure, it would be great for them to have the experience of constructing their own logic models or designing their own tools. I guess. But, to a certain extent, that’s what I’m here for, and, while I get the whole ‘teach someone to fish’ concept, we get to the greatest field capacity by getting over the idea that everyone has to be and do everything and, instead, figuring out how to make the expertise of the collective available to all.
  • Letting their questions totally drive the evaluation: This, too, sounds contradictory to the idea of starting with their questions, but it gets back to that whole question of balance: if organizations already knew exactly what they needed to be asking, and how, they probably wouldn’t need my consultation. If I’m going to add value, it should be at least in part through informing their consideration of their options, and influencing their identification of the issues that most warrant investigation. This, of course, can’t mean driving the agenda, but neither are we in the ‘agency-pleasing’ business. My ultimate responsibility is to the cause, and to those affected by it, and sometimes that has to mean some pushing back.
  • Assuming evaluation is a technical challenge only: Organizations sometimes have real reasons to not want to embark on a particular evaluation project: maybe they are afraid of what the results will show, or maybe they worry about who will want access to their findings, or maybe they fear the effect on staff morale if strategies are exposed as less than effective. None of these are reasons not to evaluate, of course, but we can’t start from the assumption that it’s only lack of knowledge or skill that is holding organizations back from evaluation, when it may very well be will.

If you are engaged in advocacy evaluation or have worked with or as an evaluation consultant, what’s your ‘view from the field’? What do you wish consultants knew about engaging organizations in advocacy evaluation capacity?
