
Practice Reflections: Building Advocacy Capacity

This is the last post in this week’s series sharing some of my reflections from my advocacy consulting practice.

To me, there is a distinction between supporting advocates, as I wrote about earlier this week, and building advocacy capacity.

The former, to me, is about making sure that advocates have the scaffolding that they need, in the heat of a campaign or at critical decision points, to be effective and advance their issues.

The latter requires investing, over the long term, in staff skills and knowledge, in leadership buy-in, and in the confidence with which to make critical choices in the face of adaptive challenges.

This post is about that second piece.

I recently had the opportunity to debrief an advocacy capacity assessment with an organization, the first time that I have been privy to an extended conversation about an organization’s self-assessed capacity and, in particular, what they plan to do to build on their foundation. They used the TCC Group’s Advocacy CCAT and, while the organization shall remain nameless here, there are still some lessons, even absent that specific context, that I think can be instructive for our collective consideration of advocacy capacity-building.

I would love to hear from those who have used this or other advocacy capacity assessments, about your experiences with these tools, or from those who are in the process of advocacy capacity building. And I am so grateful to those who let me observe their work through organizational change, and to those who labor to build their strength, so that they can be better advocates for the causes and the populations they serve. It is an honor, always, to be along for the ride.

Thoughts on Advocacy Capacity-Building:

  • As capacity goes up, the goalposts may move: This particular organization had completed the Advocacy CCAT a few years ago, so this was a sort of post-intervention assessment for them, following a period of advocacy capacity investments. You can imagine their concern, then, when their aggregate scores in some areas were lower than that baseline. As we talked through the indicators they looked at in order to inform their scoring, though, it quickly became clear that at least part of the explanation lay with their increasing sophistication and, then, the higher standards to which they hold themselves. It’s a sort of ‘you don’t know what you don’t know’ phenomenon, and, now that they know, they are harder on themselves than they otherwise would have been.
  • Where capacity is held matters: This particular organization had to grapple with the reality that their actual, usable capacity is not as high as the aggregate ‘scores’ would suggest, since much of their capacity is held rather narrowly within the executive leadership. To have sustainably high capacity, organizations need to diffuse it throughout the organization. Advocacy capacity assessments can only take you part of the way towards this understanding; intentional exploration of the findings, with an eye toward organizational culture and change, is needed to ‘root’ advocacy capacity where it’s needed.
  • Sometimes, the ‘problem’ isn’t your problem: This particular organization also had comparatively low levels of strategic partnerships revealed through their advocacy CCAT. In discussion of this particular finding, we faced honestly the reality that much of this weakness stems from limited field capacity, rather than the organization’s unwillingness or inability to leverage the strength of that field. This can be tricky business, since there’s of course a natural human tendency to want to pin ‘culpability’ for exposed weaknesses on anyone other than ourselves. But, at the same time, failing to account adequately for the environmental constraints that limit an organization’s capacity can lead to frustration, as leaders spin their wheels trying to move the needle on something located beyond their locus of control.
  • Small shifts can help: There is a default, in any organization, to maintain equilibrium, especially when things are going relatively well. Part of the answer to breaking through this resistance to change rests, I think, in breaking off small changes that an organization can pursue that ‘inch’ towards their aspirations. It’s also essential to understand what motivates a given organization to deal with difficult tasks, since any task of organizational change includes some risk and loss.
  • You know your own recommendations: For this organization, and I think for many, while seeing the results of the advocacy CCAT was a very powerful experience, and the way in which the TCC tool aggregated these results was extremely helpful, the recommendations for how to build on their capacity were not that useful. They really knew what they needed to do, and what was realistically on the table, and the recommendations rarely pointed them in a direction that was novel.
  • Culture is king: We spent the most time, by far, talking about the organization’s culture and the extent to which it supports advocacy. This includes thinking about how the organization celebrates successes, how people feel comfortable to take risks, how they publicly acknowledge those who contribute to their success, and, so, how they sustain their advocacy efforts through the continual feeding of a pro-advocacy climate. Constructing and nurturing a healthy culture is, of course, an inexact science, which is part of what makes it so important an area of emphasis. I appreciated how the advocacy CCAT pulled it out as a separate layer of analysis, but it was also crucial that we wove it into our discussion about every element of the organization’s advocacy capacity, since it will be difficult to build anywhere without a culture that prioritizes it.

Separately, I have been talking with some folks who are looking at ways to build an infrastructure to support advocacy capacity in nonprofit organizations and civic institutions throughout Kansas. These conversations are still very nascent, but it looks as though it will include investing in technical assistance providers, fostering advocacy among leaders, convening advocates across fields, building policymaker capacity to use advocacy effectively, and conditioning the environment for advocacy (including among philanthropists).

What is most exciting to me about this new direction is what it reflects: an increasing recognition that we have to get upstream a bit, not only with our issue analysis–getting to root causes–but with our advocacy preparation, too. With advocacy capacity building, we’re increasing the likelihood of tomorrow’s success and girding ourselves for the battles we can’t even see yet.

Even while we’re up to our necks in this one, and these many others.

Practice Reflections: Advocacy Evaluation

It occurred to me that I’ve been writing a lot, really over the past few months, about what I’ve been reading and about my work at the university–teaching and supporting policy activities at the Assets and Education Initiative.

But my advocacy consulting work continues, albeit at a somewhat reduced level, and so this week is a sort of ‘from the field’ update, checking in on some of the tremendous work happening in the organizations with which I have the honor and pleasure of working.

Today, some reflections on supporting organizations’ advocacy evaluation, which has been a growing part of my consulting ‘portfolio’, so to speak, over the past two years. There are national organizations and practitioners far more expert than I in the field of advocacy evaluation, publishing regularly and spending most of their professional energies dedicated to advancing this work.

But I hope that some of my insights as a practitioner may add some value, especially for those in the field: insights from supporting organizations’ efforts to incorporate advocacy evaluation at a scale that really fits not just their advocacy capacity, but the slice of their overall organization that advocacy occupies.

I have written before about my work in advocacy evaluation, but not in quite a while, so these are sort of my thoughts over the past several months, hopefully adding value to that earlier conversation.

What Works, in Supporting Organizations’ Advocacy Evaluation Capacity:

  • Starting with a dialogue about what they really want to know: I know that this sounds really obvious, but there’s sort of a trick to this. As I discuss below, we cannot begin an evaluation exercise just asking what organizations want to learn about their work, because there can’t be good evaluation without a framework for what we are evaluating. That’s part of the value we have to add as evaluators. But, conversely, starting with the logic model and emphasizing that structured process, without attending to organizations’ sometimes urgent need for more information about their work, is a recipe for disengagement. Getting this right is an art, not a science, but I think it requires acknowledging this tension (see below for more), opening dialogue about the end game, and then continually holding each other accountable for getting back to those sought objectives.
  • Acknowledging their evaluation ‘baggage’: There is unnecessary tension when nonprofits think that evaluators are cramming evaluation down their throats and evaluators think that nonprofits are being cavalier about the enterprise. The truth is that we cannot improve–as individual organizations or as a field–without good evaluation and analysis. But it’s also true that evaluation can be a fraught experience for a lot of organizations, and no one wins when that history and context are ignored, either. That doesn’t mean making a lot of jokes about how horrible logic models are, but it does mean putting on the table everyone’s own background in evaluation–or relative lack thereof–as a dynamic affecting the work.
  • Demystifying ‘analysis’: This, again, sort of falls into the ‘obvious’ category, but sometimes we as consultants/experts/technical assistance providers seek to demonstrate our legitimacy (I think/hope it’s that, instead of our superiority) by enhancing, rather than deflating, the mystique around our work. But no one wins when research or evaluation or analysis (or, fill in the blank: fundraising, organizing, budgeting) is considered difficult or, worse still, mysterious. The biggest breakthroughs I have had in advocacy evaluation with organizations have come when they realize how much this is just about putting form and structure to what comes instinctively to them: asking questions about their work and setting out to find the answers.
  • Bridging to funders: There is no more immediately applicable use of the advocacy evaluation enterprise than communication with funders about organizations’ strategies, adaptations, outcomes, and progress. We absolutely cannot engage in advocacy for funders’ sake, but we also cannot expect to be financially sustainable over the long term if we fail to consider funders’ need for information to drive their own decisions. As a consultant, I can broker this relationship, to a certain extent, simultaneously serving both ‘clients’.
  • Investing in process: The how matters, here and always. Sometimes this means bringing people into conversations who wouldn’t necessarily need to be there, because the organization wants to invest in their capacity. Or it means detouring to develop some indicators and measures relevant to a particular funder, because that will enable organizational staff to convince the Board of Directors that this evaluation work is valuable. Or it means going through the process of testing each of the assumptions embedded in an organization’s strategy, because only teasing those apart yourself can really lay them bare. This stuff can’t be rushed, so we have to allow the process to unfold.
  • Starting with sustainability in mind: Every nonprofit organization doing great work right now is, at least, plenty busy. Some are pushed to the breaking point. So it doesn’t matter how well you make the case for the value of advocacy evaluation, or how excited you get the staff about leveraging their knowledge for greater impact, or even how much funders appreciate the information. Unless there is a realistic way for an organization to take on the work of advocacy evaluation, it just won’t get done. To me, this means being willing to scale back an evaluation plan, to help an organization think about what they can glean of value within the resource footprint that they have available. That sometimes means cutting corners on tools or abandoning certain fields of inquiry, but that doesn’t mean failure; it can mean that there’s a real future for evaluation in the organization.

What Doesn’t:

  • Expecting organizations to care as much about evaluation as evaluators, at least at first: This is what we do for a living (well, not me, so much, but you know). It cannot be the advocacy organization’s reason for being, or they wouldn’t be doing advocacy that we could then evaluate. We can’t get our feelings hurt or, worse, assume that organizations aren’t ‘serious’ about building their evaluation capacity, just because it may not be #1 on their to-do list.
  • Prioritizing ‘rooting’ evaluation in the organization, at the expense of added value: So, yes, as I said above, we absolutely need to think about sustainable ways for organizations to assume responsibility for advocacy evaluation within their existing structures. But that shouldn’t mean relegating ourselves to a mere ‘advice-giving’ function, with the expectation that organizations take on all of the work surrounding advocacy evaluation, at their own expense. Sure, it would be great for them to have the experience of constructing their own logic models or designing their own tools. I guess. But, to a certain extent, that’s what I’m here for, and, while I get the whole ‘teach someone to fish’ concept, we get to greatest field capacity by getting over the idea that everyone has to be and do everything, and, instead, figuring out how to make expertise of the collective available to all.
  • Letting their questions totally drive the evaluation: This sounds contradictory, too, to the idea of starting with their questions, but it gets back to that whole question of balance–if organizations already knew exactly what they need to be asking, and how, they probably wouldn’t need my consultation. If I’m going to add value, it should be at least in part through informing their consideration of their options, and influencing their identification of the issues that most warrant investigation. This of course can’t mean driving the agenda, but neither are we in the ‘agency-pleasing’ business. My ultimate responsibility is to the cause, and to those affected by it, and sometimes that has to mean some pushing back.
  • Assuming evaluation is a technical challenge only: Organizations sometimes have real reasons to not want to embark on a particular evaluation project: maybe they are afraid of what the results will show, or maybe they worry about who will want access to their findings, or maybe they fear the effect on staff morale if strategies are exposed as less than effective. None of these are reasons not to evaluate, of course, but we can’t start from the assumption that it’s only lack of knowledge or skill that is holding organizations back from evaluation, when it may very well be will.

If you are engaged in advocacy evaluation or have worked with or as an evaluation consultant, what’s your ‘view from the field’? What do you wish consultants knew about engaging organizations in advocacy evaluation capacity?

It’s here! Report from Advocacy Capacity Tool Users

I’ve never camped out for a new record release, or a new iPhone, or, well, anything.

I’m not really much for sleeping under the stars.

But I have been eagerly awaiting the release of some aggregate data about the organizations that have taken the Bolder Advocacy (Alliance for Justice) Advocacy Capacity tool, especially since these data were one of the major impetuses for moving the ACT to a free format. It’s sort of a trade, really; in exchange for access to the assessment at no charge, organizations agree to let AFJ learn from their responses (anonymously).

Since organizations can then compare themselves to other organizations within their sectors, or of their same sizes, I think examining these results can spark some critical conversations within nonprofit Board rooms.

But I’m even more interested in what looking at these findings can do for grantmakers, capacity builders, and others interested in catalyzing advocacy fields. And that’s how AFJ has framed this first analysis of the initial Advocacy Capacity Tool users: what do organizations need, to move forward?

The Executive Summary is only five pages long and well worth your time, but in the interest of even speedier access, here are the most important pieces, from where I sit (as one training future professionals and providing technical assistance today):

  • Yes, organizations want more advocacy funding, but better planning is perceived as even more important, to advance their advocacy. I do quite a bit of campaign planning with advocating organizations, and I definitely see this need, too. To me, it also relates to their adaptive capacity, because it’s hard to quickly pivot your strategies–and, so, to develop better plans–without having engaged in an intentional reflective process from the beginning.
  • The areas that nonprofit organizations most want to improve, in their advocacy, did not necessarily correspond to their weakest areas. AFJ theorizes that this is because organizations are prioritizing the areas that are most important to their advocacy. But I think it could also reflect the adage that it takes capacity to build capacity, so maybe some of the other elements are places where organizations feel overwhelmed. Or, possibly, organizations feel that they have complementary relationships with other sectors/providers that fill these needs, which, for thinking about field capacity, would be a very promising thing.
  • Legislative advocacy is the best developed, an unsurprising finding that, nonetheless, deserves some of our attention, particularly as elected officials around the country evidence considerable resistance to social work policy priorities, emphasizing the importance of using the entire spectrum of tools with which to induce change. At the same time, the large number of organizations indicating that they have not taken the 501(h) election suggests that there may be room to enhance this legislative advocacy, too.

There is so much about which to be excited here–the availability of a strong tool, for free; organizations’ willingness to share their data, including intimate data about governance and funding; AFJ’s commitment to making this information available in a transparent way.

I look forward to future cohorts’ findings and to the ongoing conversation about what we’re learning.

I’m not pulling out the lawn chairs to camp out on the sidewalk yet, but I’m eager.

Advocacy principles and core priorities

Photo credit, Michal Dubrawski, via Flickr, Creative Commons license


One of the first items of business, when I’m working with a new nonprofit organization around advocacy capacity-building, is to talk through their advocacy principles.

In our work, principles come before the development of an advocacy agenda. In some cases, they replace an agenda altogether, providing the general guideposts that organizations need to navigate decisions in an advocacy context, without pretending that we can predict today the circumstances we’ll face tomorrow, or how we’ll make those trade-offs once we get there.

We talk through how the organization’s core values translate into an advocacy context. We discuss their preferences in public policy development. We discuss how having advocacy principles not only helps the organization stay true to its greatest goods in the event of conflict, but also serves as protection against the intrusive interests of others, by providing some parameters about the types of issues the organization does not take on, in addition to those that it does.

In my experience, organizational mission statements are often too broad to serve this purpose; they tend to be statements that absolutely no one could disagree with, but also that fail to really distinguish one organization from another (aren’t we all interested in ‘strengthening families’, really?).

What we need are guides that help us decide between two goods (Do we prioritize money for prevention or for rapid response? Do we emphasize children’s services or community-level interventions?) or, more often in a policy world, two rather poor compromises (Are we going to put more energy into fighting the repeal of the Earned Income Tax Credit or drug-testing in TANF?).

Done correctly, these advocacy principles also help nonprofits to articulate why they have ‘ranked’ particular policy outcomes as they have, which is incredibly important as we endeavor to preserve relationships in the conflictual climate of policymaking.

They are navigational tools, important symbols of organizational culture and decision-making, and guideposts–not prescriptions–for helping leaders maneuver through difficult choices.

I particularly appreciated this description of core priorities, a similar concept from a context somewhat removed from advocacy, in Decisive: “guardrails that are wide enough to empower but narrow enough to guide” (185).

That’s what we’re aiming for, when we work through the often-laborious process of settling on advocacy principles as the starting point for our advocacy work.

And, like so many other exercises in ‘centering’ ourselves and clarifying our deepest purpose, once we get that right, the rest of our decisions are, while not ‘easy’, at least easier.

Good decision making in social service organizations

Does your nonprofit organization make good decisions?

Consistently, over time, when it counts, in ways that contribute to impact?

How do you know?

And how do you establish processes that make it more likely that you keep making good decisions, to drive towards your vision of change?

I’m a little obsessed with these questions right now, contemplating what distinguishes nonprofit organizations that thrive–and bring impact along with them–from those that sort of muddle through or coast–failing to make the mark that they could.

I have been thinking about this a lot more since reading Decisive, and I’ve looked at the organizations with which I’ve been working most closely over the past few years, through the lens of that analysis, for patterns and ideas about how to catalyze better decision making.

But I’m also very interested in your experiences and your practices, to drive good decision making.

What works for you, what have you learned, and what are you willing to share?

  • You need good information for good decisions: It sounds obvious, I know, but there are still many organizations without much evaluation capacity, especially in advocacy, and with few channels to systematically collect and, even more importantly, interpret, the information they need. This has to be zoomed in and out, too; you need base rates and big-picture data, but you also need stories and ‘texture’, to complete the picture of what is really going on with your organization and what you really must know. Without intentional methods through which to gather and act on this information, it won’t happen serendipitously.
  • Organizational culture matters: Organizations need a climate where people aren’t afraid to experiment and dissent, if they are to get good decisions over time. Maybe our nonprofits should have ‘failure of the year’ contests, where we celebrate the little failures that, collectively, can inform our futures? Maybe we need to think about how to institutionalize the devil’s advocate roles that must be part of our conversations.
  • Adaptive capacity is essential: We have to scan not just our own processes and histories, but also the landscape, if we are to have a chance to succeed not just in today’s context, but tomorrow’s, too. That means developing listening channels that help us to understand what other organizations are doing, what social indicators are telling us, and what our best predictions suggest is coming.
  • We have to recognize choices when they are present: There’s so much inertia in our lives, and our organizations are no different. To combat this, organizations need to know when to get off auto-pilot, so that we don’t limp through opportunities to make decisive changes. Not acting is an action, as I tell my students every semester–in advocacy and in nonprofit governance–and so we need to recognize when we’re faced with a decision point.

What are your techniques for making good decisions? What guidance would you share with others? What really excellent decisions have you made, especially if they weren’t immediately recognizable as such? What not-so-great decisions have you made, and what led to those?

Supporting Advocates in the Field

We are moving into a new phase with the group of nonprofit health executives whose work I help to support, as they approach a more intentional level of systems and policy advocacy, focusing more on sustaining and surrounding former Fellows, following the one-year intensive commitment.

That means that we have to really think about what it takes to sustain advocacy within the career of a nonprofit executive, and how to root the learning of these Fellows in their organizations, and how a community of advocates (many of whom are engaged in quite disparate issues) can help individual advocates to improve and maintain their efforts.

We know some of the ways that we will approach this:

  • Building towards a field orientation, helping Fellows to think more about their own advocacy capacity and the capacity of their allies, so that they can develop a greater understanding of what their network looks like and how they can leverage others’ assets for collective impact
  • Connecting Fellows to others in their regions, across issues, given the significance of regional power alliances within policymaking in Kansas–this means that those who are working in Northwest Kansas, for example, on anything health-related, would be assisted to develop opportunities for working together, rather than exclusively bringing those working on oral health together, around the state
  • Facilitating a social space, in a contained environment, where Fellows can freely exchange ideas and resources, and, importantly, complain and lament and be supported in those emotions, too
  • Adding new knowledge and skills, because Fellows are clear that their learning around advocacy is not done, and that they will only be able to regularly prioritize the time commitment necessary to come together as Fellows if they can justify to their Boards how they will gain new tools and insights for their work
  • Layering on engagement with others within the Fellows’ organizations, perhaps to include webinars that will be accessible to Board members, volunteers, and front-line staff members, or periodic ‘seminal’ events with invitations extended to key stakeholders

I will be responsible for much of the content creation for the social space, which is why I was thrilled to come across a post from Community Organizer 2.0 about the content/community connection. And I’ll help with facilitating the regional gatherings and doing some of the event planning, too. We have fairly extensive evaluations from Fellows, including some debriefing conversations following the completion of the Fellowship, about the kinds of opportunities they crave to connect and to advance their advocacy.

But, I’m hoping to crowdsource this one, too.

Especially if someone else was going to foot the bill, what types of activities would you prioritize, for your own advocacy development? What kinds of relationships–mentors, ‘kitchen cabinets’, peers, apprentices–do you value most? What kinds of skills and knowledge are most needed? What would you ask for, if you had the chance, as investments to support your advocacy?

What would make the greatest difference, so that you could make the greatest difference?

Backbone Organizations for Advocacy

So I’m kind of taking the easy way out today.

Basically, what I want you to do is read this series of posts about the importance of backbone organizations for collective impact, written from the perspective of the Greater Cincinnati Foundation, on the Stanford Social Innovation Review site.

My only contribution, to add any value to what they have shared, is that we need these organizations–and these roles–so badly in advocacy.

We need organizations, in each of our issues, but, more importantly, across our sectors, wherever anyone is working for justice and health and peace, that articulate an overarching vision and help us craft strategies likely to take us there. We need funders who will encourage us to pursue aligned activities and will become comfortable with a ‘contribution, not attribution’ mindset that provides the freedom for us to meaningfully partner. We need shared measurement and enhanced capacity to assess where we stand, in relation to where we want to be. We need investments in building public will, so that we don’t ever see ‘the public’ as the problem, and so that we’re not asking policymakers to outrun their constituents in order to advance policies that meet our collective needs. We need entities that can build bridges to real money–the kinds of public and private dollars needed to build infrastructure that can support a movement.

Backbone organizations do that. And we need them.

But, too often, they’re labeled ‘overhead’, along with the critically important functions they fulfill. And, for want of relatively small investments in capacity and connectivity, we pay long-term prices in compromised effectiveness.

One of the most fascinating parts of the posts is where the evaluators asked those in the field–those for whom these organizations are being ‘backbones’–what difference these entities make.

In response, actors said things like, [without them] “even more decisions in our community would be made by a small group of folks,” “communities would be simply in survival mode,” “the public wouldn’t have near the understanding of the challenges,” and “there wouldn’t be any coordinated program at all.” As one stakeholder said, “If they weren’t asking the right questions, we wouldn’t be [where we are today].”

They’re talking about engagement and translating issues for public consumption and prodding towards benchmarks. The difference means clarity of purpose and common direction and greater comfort understanding one’s own niche in the larger field.

Capacitated well and positioned correctly within their networks, backbone organizations make it possible for those in the field to be who and what they need to be–advocates, organizers, mobilizers, fundraisers, analysts–and to do so in a seemingly-natural concert that, in the aggregate, begins to approach collective impact.

This doesn’t mean that all is smooth and, certainly, these posts are honest about ‘learning in public’ and acknowledging some of the ongoing challenges, around data (and how to share them), communications for public will-building, identification of the best policy objectives, and the continuous work to break down silos and help people to play their own roles, towards common goals (which, truly, is way more difficult than the ubiquitous task of ‘getting everyone on the same page’).

One of the insights for me was the discussion about the characteristics of individual leaders within backbone organizations, a reflection of the truth that organizations are, after all, collections of personalities, and that individuals matter a lot, even on a big scale.

Many of these key characteristics are, not surprisingly, related to elements of advocacy capacity, since backbone organizations are, in large part, an investment in the collective capacity of a field. These leaders are visionary, results-oriented, collaborative relationship builders, who are focused but adaptive, charismatic and influential communicators, politic, and humble.

They make more good possible.

And we need them, especially since advocacy is really about how we reach collective impact, using policy as the specific lever for change.

So, please, go read the posts, and I’d love to hear what you think. Do you have backbone organizations on which you can rely in your work? How do they function, compared to the roles outlined here? Do you play a backbone role, to any extent? What would convince external stakeholders–especially those with money–of the importance of entities that can help us harness our tremendous aggregate capacities towards common aims?

And what would the world look like if we did?

Making sense of advocacy capacity assessments

If you haven’t already checked out Alliance for Justice’s new(ish) site, Bolder Advocacy, I’ll wait here while you go do that.

Regular posts about nonprofit advocacy news, interviews and profiles of changemakers in the nonprofit advocacy field (including foundations, community organizers, nonprofit lobbyists), all of their valuable materials on the legalities of nonprofit work in ballot measures, electoral activity, lobbying, and broader social change…

and a revised version of their Advocacy Capacity Assessment, which I have now used in practice with several nonprofit organizations here in Kansas.

It’s certainly not the only good capacity measure out there, and, indeed, there are others that have some features that I really appreciate. But there’s a lot to like about AFJ’s, especially this newer version, which has ‘advanced’ options for organizations whose advocacy is more well-developed, and the ability to compare an organization’s assessment against an aggregate, thanks to the free access to their tool and the categorization and clustering their site does behind the scenes.

This post is not an evaluation of the evaluation tools, though, but, instead, some thoughts on advocacy capacity, and the assessment thereof, culled from my work in advocacy capacity-building over the past year.

I’d love to hear from anyone who has used AFJ’s tool, or another advocacy capacity measure, about what they found helpful, and not. Similarly, if you’ve embarked on an advocacy capacity-building process, what reflections can you share? Next week, I’ll link to some case studies of organizations with which I worked on an advocacy capacity technical assistance project. Their experiences, I believe, hold a lot of lessons for us capacity-builders, for organizations committed to advancing their own capacity, and for the foundations that make this work possible.

Today, though, some thoughts on baselines–how we know what we need to do–and on using advocacy capacity assessments to measure our progress towards that goal of ‘capacity’, with, perhaps, some thinking about what capacity is, and why it matters so much, anyway.

  • Partners matter: One of the things that I appreciate most about the new version of the AFJ assessment is that it includes an option for “relying on partners”, when asking organizations about their abilities in specific areas. This isn’t a liability, but, instead, reflects a sophisticated understanding of the capacities of partners and how to leverage them to complement organizations’ own strengths. We’ll only get truly strong fields when we stop leading organizations to believe that they need to possess all of what they need for advocacy success themselves. We need a field lens, and this type of capacity assessment–asking organizations to think about how they rely on others and how they can build on those alliances–takes steps in that direction.
  • Measuring adaptive capacity is tough: The AFJ capacity assessment has a few different questions designed to get at the concept of adaptive capacity–how well organizations can read their environments and adjust their strategies accordingly. This is laudable, but it’s still somewhat elusive, I think. When I talk with organizations, adaptive capacity is their goal, but it is somewhat hard to grasp, both because getting that ‘read’ on the environment can be difficult, and because few advocates have structures that are adequate to facilitate quick responses to changes in that context, even when they know that should be their aim.
  • The how matters: I have used advocacy capacity assessments with organizations where only one individual completes the assessment, and where multiple actors complete it. In my experience, that process makes a difference, in terms of how capacity assessment can serve to catalyze thinking, within an organization, about where you stand and where you want to go. I know that it’s not easy to get Board members and other key stakeholders to sit down and fill out an assessment that takes 30-45 minutes. But, really, if we can’t get that much buy-in around questions of how to position our organizations for advocacy, how can we get buy-in to take the steps that move us to where we want to be, in terms of advocacy?
  • Numbers don’t matter, much: When I’ve had organizations complete the Advocacy Capacity assessment, there’s a strong temptation to focus on the ‘score’. How many points did we get? How does that compare to others? And, I get that. It’s not that the numbers don’t matter, of course; it can be really helpful to have a sense of where we stand, within our sectors, and, especially, of how far we’ve come. But, as I’ve said before, organizations can have very highly developed capacity and still not be deploying it strategically. Conversely, there are organizations that can be limping along, without some of the key investments we consider crucial, but still accumulating advocacy successes. Maybe not sustainably, but still. The important point is that the numbers are relative, and that the scores don’t mean as much as the analysis of how different elements of capacity build on each other, how organizations can invest in their capacities, and how to make sure that capacity translates into real advocacy ability and will.

What have you learned from participating in capacity assessments? What is your reaction to this tool? What do you wish existed, in terms of advocacy capacity measures? And how do you use these tools to spark conversations and build momentum, for advocacy, within your organizations?