
Learning together, for advocacy

One of Beth Kanter’s posts on measurement within nonprofit organizations addressed the “data disconnect” between organizations and their funders.

She cites research finding that more than half of nonprofit leaders say funders prioritize their own data needs over nonprofits’ needs for information about their own work. That’s an obviously concerning indicator, given what we know about the importance of data in informing good decision-making by nonprofits in pursuit of collective impact.

There are two key points from the post that I have been mulling over, especially as the Advocacy Evaluation Collaborative of which I am a part enters its second year.

First, it’s clear that nonprofit organizations want to use data more fully and more systematically to guide their work. Nonprofit leaders not only assert that; they are also dedicating some resources toward that end, which is probably even clearer proof that they’re serious. There are some real constraints here, though, particularly the lack of financial resources within most grants specifically dedicated to evaluation. We see this in the policy world, too; there’s an assumption that evaluation somehow just ‘gets done’, when, in truth, doing it well often carries significant costs. There is also some confusion about what, precisely, should be measured, but, to me, this isn’t as much a problem in the evaluation arena as in the context of pursuing impact itself. Once we’re clear about the changes we want/expect/hope to see come from a particular change strategy, it’s obvious what we’re going to measure: did those changes, in fact, happen? So, to the extent that there is a lack of clarity, or even disagreement, between organizations and funders about what should be measured, I think that reflects a larger chasm around what is supposed to be pursued.

Second, there is a risk that, as data are emphasized as part of the change process, there will be data collection for its own sake, with short shrift given to the analysis and utilization of data. And that’s a real problem, since getting some data is not nearly as important as the ‘sense-making’ process: figuring out what the data are saying, what it all means, and what to do about it. Especially when inadequate resources are dedicated to evaluation, though, something will get squeezed, and, if evaluation is conducted primarily in order to satisfy funders that it is, in fact, happening, then being able to produce data may be valued over really learning from the questions asked.

As I think back on this first year of working pretty closely with both advocacy organizations and health funders in Kansas around advocacy evaluation, I’m relatively encouraged. There have been times when the process has seemed laborious, and I have felt particular kinship with the organizations, who have often struggled to make time for the evaluation learning in the midst of an especially tough policy climate in our state.

But I think we’re mostly getting these two critical pieces right, which is where I’m hopeful. There has been a lot of talk between funders and organizations about how to decide what to measure, and about the importance of agreeing, first, on a theory of change that is worthy of evaluation, and then letting the questions flow from that understanding of the impact that is supposed to occur. And the data collection piece is actually a fairly minor chunk of the overall workflow, with much more time spent on articulating those theories of change, analyzing data together, and finding ways to incorporate evaluation rather seamlessly into organizations’ flow of operations, in order to increase the likelihood that they do something with what they learn.

It’s that emphasis, I guess, which has made the difference: on learning, collaboratively, and on evaluation as a tool with which to move the work forward, rather than a hoop to jump through.

I don’t know how you take it to scale, really, this side-by-side process of those with money and those doing the work sitting down together to talk about what they need to learn and how to learn it. This process has only involved a few health funders in one state and six advocacy organizations, and it has still been pretty expensive and time-consuming. But maybe, through peer sharing, nonprofit organizations will come to demand this kind of transparency and collegiality. And foundations will come to expect that they can be part of the learning too. And, together, we can harness the power of inquiry to get better at what we are, together, trying to do.

Change the world.


Supporting Advocates in the Field

We are moving into a new phase with the group of nonprofit health executives whose work I help to support, as they approach a more intentional level of systems and policy advocacy, focusing more on sustaining and surrounding former Fellows with support, following the one-year intensive commitment.

That means that we have to really think about what it takes to sustain advocacy within the career of a nonprofit executive, and how to root the learning of these Fellows in their organizations, and how a community of advocates (many of whom are engaged in quite disparate issues) can help individual advocates to improve and maintain their efforts.

We know some of the ways that we will approach this:

  • Building towards a field orientation, helping Fellows to think more about their own advocacy capacity and the capacity of their allies, so that they can develop a greater understanding of what their network looks like and how they can leverage others’ assets for collective impact
  • Connecting Fellows to others in their regions, across issues, given the significance of regional power alliances within policymaking in Kansas–this means that those who are working in Northwest Kansas, for example, on anything health-related, would be assisted to develop opportunities for working together, rather than exclusively bringing those working on oral health together, around the state
  • Facilitating a social space, in a contained environment, where Fellows can freely exchange ideas and resources, and, importantly, complain and lament and be supported in those emotions, too
  • Adding new knowledge and skills, because Fellows are clear that their learning around advocacy is not done, and that they will only be able to regularly prioritize the time commitment necessary to come together as Fellows if they can justify to their Boards how they will gain new tools and insights for their work
  • Layering on engagement with others within the Fellows’ organizations, perhaps to include webinars that will be accessible to Board members, volunteers, and front-line staff members, or periodic ‘seminal’ events with invitations extended to key stakeholders

I will be responsible for much of the content creation for the social space, which is why I was thrilled to come across a post from Community Organizer 2.0 about the content/community connection. And I’ll help with facilitating the regional gatherings and doing some of the event planning, too. We have fairly extensive evaluations from Fellows, including some debriefing conversations following the completion of the Fellowship, about the kinds of opportunities they crave to connect and to advance their advocacy.

But, I’m hoping to crowdsource this one, too.

Especially if someone else was going to foot the bill, what types of activities would you prioritize, for your own advocacy development? What kinds of relationships–mentors, ‘kitchen cabinets’, peers, apprentices–do you value most? What kinds of skills and knowledge are most needed? What would you ask for, if you had the chance, as investments to support your advocacy?

What would make the greatest difference, so that you could make the greatest difference?

When nonprofits are boxed into corners

This is, for now, my last post about the Center for Evaluation Innovation’s framework for public policy.

It is inspired, again, by a conversation with nonprofit advocates–mostly also executives in their organizations–with whom I was talking about some of the challenges that their organizations face in adapting to changing political climates by incorporating new strategies and engaging in new advocacy arenas.

One Executive Director spoke bluntly about the boundaries she confronts, in trying to make these shifts, because of funding sources that constrain her ability to, for example, move from policymaker education to building political will (because that looks like lobbying), or translate policy analysis and research into champion development (by explicitly reaching out to make information resonate with decision makers).

And I know this isn’t the first time that I (and others) have written about nonprofit lobbying rules (those leveled by the IRS and those more artificially imposed by foundations/donors and Boards of Directors), but I guess it’s the first time that I’ve thought about them in such clear terms:

Sometimes, these restrictions simply form barriers that make it really, really hard for us to be effective.

It’s like we confront a fence when we get to a certain point in the framework and have to stop before we can get to the impact that we seek.

In my head, I see one of those cartoons where the character hits the invisible glass wall.

Only it’s not funny.

It’s frustrating and kind of disheartening.

I think that there are ways around most of these ties that bind us:

  • Organizations should take the 501(h) election, so that they are held to a clear, dollar-amount cap, instead of the amorphous ‘substantial part’ test.
  • Organizations should always, assertively, compellingly educate foundations and other donors, not just about the legality of nonprofit advocacy, but also about its expected outcomes, and why it deserves investment.
  • Organizations should build strong networks and use a ‘field frame’ to determine where they have allies with complementary capacity and, perhaps, not all of the same limitations on lobbying.
  • Organizations should maximize their capacity in the unrestricted areas, knowing that some of that strength will spill over into other parts of the framework.

Still, for me, the epiphany in this conversation was that we can’t always maneuver around these obstacles.

There are organizations whose funding primarily comes from the federal government, and they have very little ability to engage in activity with decision makers, beyond the most ‘neutral’ education. There are organizations with very small budgets, for whom even the 501(h) limits leave few resources to dedicate to lobbying. There are organizations in contexts with few funders who are supportive of advocacy of any kind.

And all of that means that it’s harder for us to work a plan, to lay out a logic model that would move us from input A to outcome B in anything like an expected trajectory.

It can mean that we do pretty irrational things, like invest in a lot of community education and expect it to neatly lead to policy change.

It can mean that we feel stuck in a corner.

And, as a child of the 80s, I know that’s not good.

Evaluating Advocacy, de nuevo

It’s “update” week at Classroom to Capitol.

As I read through previous posts for my summer maternity hiatus, I found a few that I really wanted to revisit, rather than repost. This is the last of the three that I have chosen for this week, with new thoughts and, of course, new questions.

One of my academic interests over the past couple of years has related to questions of how we evaluate advocacy efforts: How do we know advocacy “success”, short of absolute policy change, so that we can build on it? How can we assess organizational capacity for advocacy (to have a better sense of who will succeed, and also to know where to invest)? What kinds of interim goals should form part of an advocacy strategy, and what kinds of benchmark measures should mark our progress?

Over the past year, I’ve had the chance to apply my study and training in this area to practice through work with the Sunflower Foundation and its advocacy initiatives. It’s tremendously rewarding not only to help individual advocates, and nonprofit organizations seeking to develop an advocacy voice, figure out how they’ll gauge their work, but also to be part of this evolving field and to work alongside a funder investing so much energy in contributing to good practice around these questions, too.

I love it.

More recently, my work with the Sunflower Foundation has allowed me to contribute to some of the Alliance for Justice’s conversations about how they evaluate advocacy, both on the front end (in terms of organizational capacity) and as advocates and their donors seek to determine the relative impact of different advocacy strategies. I’m very excited about AFJ’s revised advocacy capacity tool, which will be available online soon, and particularly about their approach to this work, which is aimed at getting as many organizations as possible to evaluate their own capacity (in a variety of areas; it’s a pretty thorough look at the inputs that we believe position an organization to succeed in advocacy) in order to build the field of knowledge about what makes a difference in ultimate advocacy success.

In Kansas, our hope is to eventually be able to help a given nonprofit organization know where it sits, on some of these capacity measures, compared to an aggregate of its peers, and also to develop strategies that are at least likely to lead to enhanced capacity in those same areas, so that we can build a strong cadre of advocate organizations across the geography and in different fields.

Refining these measures, and these tools, is important not just because we want to know what works in advocacy (so that we can get better and better and win more and more often), but also because being able to demonstrate how our theory of change is leading to tangible results should push more funders to feel comfortable supporting advocacy (or, at least, to expose that their real fears are about taking a stand on controversial issues, and we need to know that, too!). We’ve come quite far in the past few years; advocates are no longer left to flounder for benchmarks, no longer grasping for what might make sense to measure. It’s tremendously exciting, for the academic side of me, but especially for the promise that these tools hold in making our advocacy more robust, more acclaimed, and, ultimately, more integrated into what nonprofit organizations do all day.

And it’s great to be part of it.

If your organization is interested in advocacy evaluation and/or assessing your organizational capacity for advocacy, we should talk! I’d love to connect you to resources and (full disclosure!) include you in some of our field-building efforts, too. Because once we know what works, we just have to gather the courage to go after the money to do it.

And, then, we’re unstoppable.

Yes, they can: Foundations and Movement-Building

These are bleak times for many of us committed to progressive social change and a vision of social justice that includes an end to poverty, full protection of civil rights for citizens and for immigrants, real power for working people, universal health care, and a sustainable environment. The ongoing economic hardship that has plagued our country for all of my twins’ young lives, and a much more constrained understanding of the social contract among policymakers in our state and federal governments, can lead to despair and retrenchment.


Or we can focus on building long-term movements for social change, the kind that, if we’re being honest with ourselves, are our only hope for bringing about the world as we wish it anyway. What the almost three years since the 2008 elections have taught us, or perhaps reminded us, is that there are no shortcuts, and that we can never, ever, ever stop organizing.

And that’s why, for me, it’s the perfect time for this Foundation Review article outlining how foundations can (and should!) support movement building. It begins with the obvious acknowledgement that philanthropy does not a movement make, and that successful movements must, by definition, be driven by those animating them with their own passions and pains (so foundations have to relinquish control over the ultimate (and even many of the interim) goals, as well as the timeline).

But it analyzes powerful movements from history to define their core elements, and then suggests activities in which foundations can invest in order to infuse social movements with essential resources. My own study of the civil rights movement (I finally accomplished my goal of reading all of Taylor Branch’s trilogy on Dr. Martin Luther King, Jr.) shows the many points at which donations, from individuals and from philanthropic and religious institutions, facilitated the next steps that, combined, built one of the greatest movements for social justice our world has known. The article also illustrates the role that foundations can play in very long-term movement building with a brief history of the conservative movement and the foundations that decided in the 1960s to systematically invest in building its capacity–investments that began to pay real dividends with the election of Ronald Reagan and, certainly, are very much in play still today.

Bringing these ideas to our progressive work requires some shifting on the part of foundations, to be sure, so that they see themselves as movement strategists, more than as funders, with a commitment to changing the terms of the debate so that, ultimately, the kinds of policies we support are seen as “natural”, because we’ve framed them that way. If progressive foundations are to build the kind of world they seek, they’ll need movements to create it. And those movements will happen much more surely if they can hire the people they need, purchase the media to communicate, and conduct activities in pursuit of their vision.

And that means, yes, multi-year grants and general operating support and transparent, mutual relationships with those receiving investments. It means not expecting grantees to demonstrate their unique “niche”, but encouraging collaboration and even “duplication”, as reflecting convergence of focus and enhanced overall capacity. This report uses the term “advocacy infrastructure” to talk about these long-term investments that cross organizational and issue boundaries.

But putting all of this on foundations is unwise and unfair. Community organizers, direct service practitioners engaged in social change, and all of us who care about building movements need to think beyond single-issue campaigns, too, and develop relationships with philanthropists so that we can help them to see the future through our same vision.

We need to have clear strategies related to each of the components of successful movement building: base-building, research and framing, strategic power assessment, organizational management, engagement and networking, and leadership and vision development. We can’t expect foundations to invest in these activities if we continue to zero in on tactics immediately and populate our grant applications with detailed descriptions of what we’ll do, with little attention to the who, and, most importantly, the why.

One of my favorite parts of this discussion was the inclusion of direct service providers as a key avenue to base building. That thinking builds on foundations’ existing relationships with social service agencies and could leverage those considerable resources for real power building. It’s also significant that their discussion of leadership development transcends the intense “academies” that are fairly popular with foundations (and, absolutely, potentially very impactful), because they have a pretty high initial “cost” of entry, and we need leadership capacity development at all levels of engagement.

Of course, my interest in advocacy evaluation made me home in on the discussion of outcomes and assessment, especially because it’s very true that our nascent field of policy and advocacy evaluation misses many of the elements of movement building that would need to be included in a more comprehensive evaluation. There’s a table at the end with the stages of movement building, the five core elements, and benchmarks for each that I’ve printed out to refer to in my evaluation practice; it’s only a beginning, but it’s a good place to start. This piece is critical not only because it will add to the field of knowledge about what works and increase our understanding of social movements, but also because speaking philanthropy’s language of accountability and measures can help us to bridge these gaps.

As the authors say, “Foundations do not make history. They fund it.”

And then I’ll have even more books on my nightstand, to retrace the victories and the roles that activists and the philanthropists who invested in them played in creating the victories that we can’t imagine living without.

Here’s to a brighter future and the movements that will bring it.

We’ve got long-term work to do.

Advocates speak out on advocacy evaluation

photo credit, Michael Lokner, via Flickr

A missing piece in the discussion of advocacy evaluation has been the voices of advocates themselves. Advocates have been too busy changing the world to be included in the discussion about how we measure those change efforts; the conversation has been happening almost behind their/our backs. So I was really glad to see this report, spearheaded by the Atlantic Philanthropies and the Annie E. Casey Foundation, two of the leading philanthropic voices on social change and, it turns out, on evaluation of the same.

The purpose of this report is to provide nonprofit advocates with a platform to discuss their experiences with advocacy evaluation and to open communication with evaluators and donors about how to improve the enterprise. It opens, though, with the results of a survey of more than 200 advocacy grantees of some of the leading foundations in advocacy, and those results are themselves instructive for forming a portrait of the status of nonprofit advocacy.

Not surprisingly, only 25% of respondents have ever evaluated their advocacy. Even fewer of those have had the assistance of an external evaluator (which is significant given the limited experience of many nonprofit types in doing systematic evaluation of any kind)–only 17% of the total sample. Of course, I also question how useful the exchange with the external evaluators has been for advocates; anyone who has participated in an independent evaluation knows that evaluators vary in their willingness to actively engage program leaders in the process and shape a product that will meet the agency’s needs.

Sixty percent of nonprofit advocates are working within organizations with budgets of less than $1 million annually; fully half have budgets of less than $500,000/year. More than half of respondents, furthermore, dedicate fewer than half of their resources to advocacy, with smaller organizations more likely to be ‘purely’ advocacy organizations. Human services are the most common advocacy priority of the respondents, at 40%. Advocates are mostly engaged in state, local, and regional work; only 21% are substantially working on national advocacy. That’s interesting, I think: not surprising, given the logistical and political challenges of influencing Congress, but rather discouraging, given the rich possibilities of effective congressional advocacy.

Advocates are overwhelmingly focused on legislative advocacy (56%). This appears to include a strong grassroots lobbying component, though, with 47% also citing participation in community organizing. Only 12% are working on judicial strategies and only 5% on administrative/regulatory advocacy. That echoes what I often hear from nonprofit leaders when we talk about advocacy; they tend to think of legislative work first and foremost and are often surprised, and even confused, when I talk about other types of engagement as ‘advocacy’. One of the findings that most resonated with me was that, despite this preference for legislative advocacy, only 22% of advocates judged legislative work to be the most effective strategy!

Important for me as I continue exploring my consulting work with nonprofit organizations was the statement that research and communications assistance are the capacities that advocates view as most lacking. That surprised me, because I would think that those tools would be easiest to find from other sources, and it has caused me to rethink somewhat what I need to be discussing with nonprofit leaders.

As far as actual advocacy evaluation, those advocates that have done it note that it has helped them to refine their strategies, make the case for more funding, and pursue staffing changes. They point to lack of resources for evaluation, obviously, as a barrier, but also the need for better interim goals and an attitude that sees evaluation as a capacity-building tool rather than a punitive audit.

As the report states, the field of advocacy evaluation was virtually nonexistent less than 10 years ago and is now developing rather dramatically. The authors conclude by calling for advocacy evaluation to help advocates better change the world. In the race towards justice, they say, we need to know when to sprint and when to save our strength, and good advocacy evaluation can help us reach the finish line.

Challenges in Evaluating Advocacy

As I’ve discussed here before, one of the great challenges facing nonprofit organizations trying to integrate organizing and advocacy into their social service work (and, especially trying to get foundation or other outside funding to do that work) is in defining ‘success’ in the advocacy/organizing context and measuring the extent to which agency actions can be credited for that same success.

And this is a problem. It’s a problem because not all advocacy and organizing is very worthwhile, and the really effective work needs to rise to the top, just as in any activity in which nonprofit organizations engage. And it’s a problem because many donors use (in my opinion) the rather nebulous nature of outcome tracking in social change as an excuse not to fund it, which means fewer resources for this really vital work. And it’s a problem because we can’t maximally learn from what others are doing well (and not) if we don’t have common terms, common benchmarks, and a common mechanism for sharing and, then, building on, that collective knowledge.

So I’ve been doing a lot of reading about assessment in advocacy and organizing; I’ve talked with folks at some of the foundations and consulting firms around the country that are most advanced in this, and I’ve reflected on my own experiences as an advocate participating in evaluations. I have found a couple of resources that I think are really worth sharing, and I hope that they, and my reflections shared here, will be helpful to you as you set out not only to do social change work (yay! yay!) but also to do it intentionally well, to be strategic about how you assess it, and then to freely share your results with would-be disciples.

When I was on the strategy committee of the Coalition for Comprehensive Immigration Reform, we participated in a pretty intense evaluation of our organizing and advocacy with Innovation Network. Recently, when I was on their website, I was totally blown away by the depth and breadth of resources that they have available for free. They are outstanding: online tools for setting benchmarks and conducting evaluations, a regular newsletter on evaluating advocacy, literature on the emerging field of evaluation, and more. It’s awesome, and all you have to do is register (for free). Check it out.

I also read through almost 80 pages of a pretty comprehensive report by the California Endowment (it’s good, but I don’t expect anyone else to want to wade through it–I did link to them below, in case you are interested). There are some good resources at the back of each report, though, so you might want to check those out–some online tools (many of which are also linked at innonet) and some literature. Here are my thoughts in reading through it, and thinking a lot over the past several days about this dilemma and how advocates and donors can work through it together.

  • Having a clear (and mutually-agreed-upon) theory of change is absolutely essential–we can’t bank on achieving the actual policy change that might be the ultimate goal, but if we know what needs to take place as interim steps towards that ultimate change, then we can count those accomplishments as outcomes, knowing that they are likely to contribute to our ultimate success. I can’t stress enough how much that clicked for me this week. We need to spell out what needs to happen in terms of garnering support, changing public opinion, influencing the debate, and so on, in order for policy change to occur–doing so will not only allow our investors to hold us accountable for those steps along the way but also make our own success much more likely. Some examples given in the report: shifts in critical mass, changing definitions, changing community or individual behavior, influencing institutional policy, holding the line. As we measure how well we’re doing on these interim goals, we’ll also be gauging how we’re advancing toward our ultimate goal. Kind of a light bulb moment.
  • To really break through, foundations have to get over their overblown fears of lobbying. Otherwise, nonprofit organizations withhold some of the context of their work in order to make foundations feel safer, and then what they’re talking about is incomplete and sometimes almost nonsensical.
  • At the same time, though, we need an understanding of social change that far surpasses lobbying, or even policy change. We need to think about regulatory advocacy, legal advocacy, media advocacy, and community organizing as essential pieces of this work, just as important as legislative advocacy, depending on the target and campaign. Otherwise, we can confuse policy change (which is really just a means) with broader social change (the real goal). Doing so can lead us to prematurely declare victory or pursue an unnecessarily narrow range of activities.
  • We need capacity building. If an organization is an ‘advocacy organization’, then expecting them to turn on a dime and implement X social change campaign is reasonable. But when we’re talking about social service organizations learning an entirely new way to do their work, we need to invest so that they are then prepared to respond to opportunities (again, not just legislatively, but also in the community environment) as they develop. Along these same lines, we need to educate foundation Boards and Trustees about the long-term nature of social change and the need for investment beyond the 1-3 year term.
  • We can convert the process goals we commonly use in advocacy to the outcome indicators that foundations so want to see. It’s really just a matter of shifting our thinking. Instead of the number of meetings we held: the increase in the percentage exposed to the issue. Instead of the number of press releases: the number of times the organization was quoted. Instead of giving testimony: the organization’s statistics were used in a summary of the hearing.
  • We have to balance realistic and aspirational goals as we set our benchmarks. If we’re not striving, we’re not going to win, but if we are setting ourselves up for failure, we likely won’t be able to sustain the effort necessary for the long-term haul that is building social change.
  • Nonprofit organizations have to push back somewhat on the drive towards quantification of results; while these outcomes are important, they can also diminish the validity of alternative measures, like storytelling, that may resonate more powerfully among the constituencies with whom you’re organizing.

Organizers and advocates know that it’s not enough just to work really hard, or even to get a lot of people to show up or get a lot of attention (although those are great things!). We have to be making progress towards the kinds of social changes our society so desperately needs. We have to hold ourselves accountable not just because we can get more money that way, or because it makes us look good, but because the marginalized communities with which we’re working have been sold inferior goods and services for far too long–they deserve to work with people who can and will deliver. We need to learn what we need to measure, and then measure it, and then not be afraid to shout that it works!

Challenges in Evaluating Advocacy, Part I

Challenges in Evaluating Advocacy, Part II