
Making sense of advocacy capacity assessments

If you haven’t already checked out Alliance for Justice’s new(ish) site, Bolder Advocacy, I’ll wait here while you go do that.

Regular posts about nonprofit advocacy news, interviews and profiles of changemakers in the nonprofit advocacy field (including foundations, community organizers, nonprofit lobbyists), all of their valuable materials on the legalities of nonprofit work in ballot measures, electoral activity, lobbying, and broader social change…

and a revised version of their Advocacy Capacity Assessment, which I have now used in practice with several nonprofit organizations here in Kansas.

It’s certainly not the only good capacity measure out there, and, indeed, there are others with features that I really appreciate. But there’s a lot to like about AFJ’s, especially this newer version, which has ‘advanced’ options for organizations whose advocacy is more developed, and the ability to compare an organization’s assessment against an aggregate, thanks to the free access to their tool and the categorization and clustering their site does behind the scenes.

This post is not an evaluation of the evaluation tools, though, but, instead, some thoughts on advocacy capacity, and the assessment thereof, culled from my work in advocacy capacity-building over the past year.

I’d love to hear from anyone who has used AFJ’s tool, or another advocacy capacity measure, about what they found helpful, and not. Similarly, if you’ve embarked on an advocacy capacity-building process, what reflections can you share? Next week, I’ll link to some case studies of organizations with which I worked on an advocacy capacity technical assistance project. Their experiences, I believe, hold a lot of lessons for us capacity-builders, for organizations committed to advancing their own capacity, and for the foundations that make this work possible.

Today, though, some thoughts on baselines–how we know what we need to do–and on using advocacy capacity assessments to measure our progress towards that goal of ‘capacity’, with, perhaps, some thinking about what capacity is, and why it matters so much, anyway.

  • Partners matter: One of the things that I appreciate most about the new version of the AFJ assessment is that it includes an option for “relying on partners”, when asking organizations about their abilities in specific areas. This isn’t a liability, but, instead, reflects a sophisticated understanding of the capacities of partners and how to leverage them to complement organizations’ own strengths. We’ll only get truly strong fields when we stop leading organizations to believe that they need to possess all of what they need for advocacy success themselves. We need a field lens, and this type of capacity assessment–asking organizations to think about how they rely on others and how they can build on those alliances–takes steps in that direction.
  • Measuring adaptive capacity is tough: The AFJ capacity assessment has a few different questions designed to get at the concept of adaptive capacity–how well organizations can read their environments and adjust their strategies accordingly. This is laudable, but it’s still somewhat elusive, I think. When I talk with organizations, adaptive capacity is their goal, but it is somewhat hard to grasp, both because getting that ‘read’ on the environment can be difficult, and because few advocates have structures that are adequate to facilitate quick responses to changes in that context, even when they know that should be their aim.
  • The how matters: I have used advocacy capacity assessments with organizations where only one individual completes the assessment, and where multiple actors complete it. In my experience, that process makes a difference, in terms of how capacity assessment can serve to catalyze thinking, within an organization, about where you stand and where you want to go. I know that it’s not easy to get Board members and other key stakeholders to sit down and fill out an assessment that takes 30-45 minutes. But, really, if we can’t get that much buy-in around questions of how to position our organizations for advocacy, how can we get buy-in to take the steps that move us to where we want to be, in terms of advocacy?
  • Numbers don’t matter, much: When I’ve had organizations complete the Advocacy Capacity Assessment, there’s a strong temptation to focus on the ‘score’. How many points did we get? How does that compare to others? And, I get that. It’s not that the numbers don’t matter, of course; it can be really helpful to have a sense of where we stand, within our sectors, and, especially, of how far we’ve come. But, as I’ve said before, organizations can have very highly developed capacity and still not be deploying it strategically. Conversely, there are organizations that are limping along, without some of the key investments we consider crucial, but still accumulating advocacy successes. Maybe not sustainably, but still. The important point is that the numbers are relative, and that the scores don’t mean as much as the analysis of how different elements of capacity build on each other, how organizations can invest in their capacities, and how to make sure that capacity translates into real advocacy ability and will.

What have you learned from participating in capacity assessments? What is your reaction to this tool? What do you wish existed, in terms of advocacy capacity measures? And how do you use these tools to spark conversations and build momentum, for advocacy, within your organizations?

Evaluating Advocacy, for us

I mentioned the Kansas Advocacy Evaluation Collaborative the other day; one of the super-cool parts about it is that I get to work with the Center for Evaluation Innovation, including the really smart and incredibly fun Tanya Beer.

On top of that, I’m really encouraged by the way that advocacy evaluation–and the reason for building evaluation capacity–is introduced to these health advocates, all of whom are very busy and would be justified in not really jumping on the ‘please do this, too’ train.

It’s about figuring out how to advocate better.

I mean, yes, there are a lot of foundations in the room. So, yes, there’s the expected angst about their reporting requirements, and how to explain a given advocacy effort in a way that can gain foundation support, and how much to share about strategy…all of those very real constraints that I cannot forget, just because I am lucky enough not to have to worry about them anymore.

But, last month, when I sat down with the advocacy organizations in Kansas with whom I’ll be working most closely to provide technical support, to talk through what they would like to assess, I was really, really excited by their responses.

One organization already has a quite sophisticated system for looking at the capacity and engagement of their grassroots allies; they rank them in terms of their commitment to the issues and track their progress as these grassroots advocates move along a continuum of leadership. This way, they can figure out not only how to deploy people effectively, but also where to target their efforts for investing in specific people, specific issues, and specific capacities.

Very cool.

The organization expressed, however, that they lack this same capability when it comes to their partner organizations, even though they know that their coalitions, too, vary in terms of their capacity and their authentic connection to the issues. They recognize that the strength of their network relationships affects how they engage on a given issue, and the likelihood of their success.

On some of their issues, they are literally surrounded by fairly well-functioning coalitions, such that, taking a field capacity view, this organization does not need to be well-positioned on ALL of the essential elements of capacity, since there are others who can fill the inevitable gaps.

On other issues, though, they stand almost alone. Or, at least, there are few other voices raising the same nuanced angles on the issues that they do, which means that they can’t count on others to carry that work forward in quite the same way.

This calculus makes them particularly cognizant of the need for field capacity, so they want to look at a partner assessment or network mapping, to get a better sense of who has what, and can do what, on which issues. They may even want to solicit financial support for some of their partners, as part of a collaborative, in order to indirectly strengthen their own capacities. And they need help knowing where to focus: on partnerships that are particularly promising, but still weak, the same way that they do with individual allies.

The other organization with which I’ll primarily work is still fleshing out what they want to focus on, but the common theme is this:

If advocacy evaluation is going to work, really work, and not just be a series of hoops we jump through to please funders, then we have to do it for us.

We have to see evaluation as an opportunity to ask the questions to which we want answers. We have to construct evaluation methodologies that fit with our practices and our skills. We have to work on timelines that align with our advocacy campaigns. We have to produce results that we can digest, and act upon, and build from.

We have to see evaluation as a part of how we adjust our strategies, how we increase our power, and how we make our work stronger.

We have to want it, really, and see that we need it.

Because we do.

Assessing where you sit–the question of network centrality

One of the challenges in evaluating advocacy is really just a variation on a universal bane of researchers: the contamination by extraneous variables.

In advocacy, after all, there are so many different things that can impact the ‘success’ or ‘failure’ of an initiative, only some of which are remotely within the control of the entity being evaluated. Evaluators, then, are hesitant to ascribe too much of a given victory or defeat to the actions wielded by the organization/advocate, because, in so doing, they could be inadvertently inflating or deflating the true impact of the effort.

One of the most promising approaches to getting around the pesky reality that advocacy can’t happen in an isolated lab is the idea of network centrality.

Network centrality means measuring advocates, rather than the discrete advocacy effort–essentially looking at one angle of the adaptive capacity question. It requires determining the reputation of an advocate or organization within the network of allies and targets it needs to influence–today or over time–if it is to prevail in an advocacy campaign. It’s not necessarily easier to collect these assessments than measures of controlled cause-and-effect, and it’s about as subjective as most everything else in the social sciences, but it’s tremendously valuable in predicting how one will be able to move within the advocacy context. It’s designed to work in real life and real time.

And you can use it to sort of self-assess, too.

Have you ever tabulated the policymaker targets with whom you have a close relationship–the ones that routinely ask you for information and turn to you for policy guidance? What about those who may not approach you, but who are very receptive when you initiate the exchange? What about looking at your coalition relationships–with which organizations are you in relationship, and how often do others look to you to lead an advocacy effort? How many entities within your community are aware of your policy priorities? How many would report that you are a trusted source on policy issues? How many visit your website to check out action alerts? How frequently do media contacts rely on you?

Have you asked?

Understanding how we move within the orbit that is our advocacy network, where we sit, and how others see us can give us valuable insights into how we can maneuver effectively within our context. It can reveal why we feel marginalized in some debates, point out where we need to invest more relational energy, and guide us towards new tactics to build our reputation with key stakeholders.
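If you already track these relationships, even in a simple spreadsheet, a rough version of this analysis is within reach. Below is a minimal sketch in Python, using the networkx library; the organizations and ties are entirely hypothetical, stand-ins for whatever your own relationship tracking actually contains.

```python
# A minimal, hypothetical sketch of network centrality for a small
# advocacy network. Each edge is a working relationship: coalition
# membership, information sharing, or a policymaker who turns to you.
import networkx as nx

ties = [
    ("Our Org", "Health Coalition"),
    ("Our Org", "Sen. Smith's office"),
    ("Our Org", "Legal Aid Partner"),
    ("Health Coalition", "Sen. Smith's office"),
    ("Health Coalition", "Hospital Association"),
    ("Legal Aid Partner", "Hospital Association"),
]

G = nx.Graph()
G.add_edges_from(ties)

# Degree centrality: the share of the network we touch directly.
# Betweenness centrality: how often we sit on the shortest path between
# others, which is frequently where advocacy leverage lives.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for org in sorted(G.nodes()):
    print(f"{org}: degree={degree[org]:.2f}, betweenness={betweenness[org]:.2f}")
```

None of this replaces the harder, qualitative work of actually asking targets and partners how they see you, but even a crude map like this can surface the gaps worth asking about.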

It’s not egotistical to want the power we need to get the changes we want for the people we serve. It’s not self-serving to spend time analyzing how we connect to those with whom we need to have influence, so that we can figure out how to better wield that influence in pursuit of justice.

It’s not about trying to make ourselves the sun.

It’s about making sure that we are in a position to shine.

In search of adaptive capacity

Advocacy evaluation has been both an academic and an applied pursuit of mine for the past three years or so now.

I’ve learned a lot about how to build accountability and shared learning into advocacy campaigns, how to measure the comparative impact of different approaches, and, most importantly, what to address in order to improve the effectiveness of a given advocacy organization.

I’ve learned a lot about what doesn’t predict advocacy success, too: sheer organizational size or budget, ‘inside’ connections to decision-makers, expertise in a given issue area, even the number of grassroots activists.

All those things matter, certainly.

But what seems to matter more than anything, and what captures, in a way, the effect of knowing how to bring those elements and others together into a cohesive and potent whole, is adaptive capacity.

It’s an idea borrowed from systems theory, and I’m certainly not the first one to connect it to the task of advocacy evaluation.

It’s one of those beautiful theories that makes intuitive, as well as empirical, sense.

Wouldn’t we expect that, in the advocacy realm, those organizations that can adapt to a continually changing environment would see the greatest success, especially over time?

The challenge, then, for nonprofits engaging in advocacy (and, we assume, wanting to win!) is to figure out what adaptive capacity looks like, for them, and how to build it. Otherwise, you get organizations that build the same kind of campaign, or deploy the same kinds of messages, or use the very same tactics, no matter the context, which yields victories (not surprisingly) only when the actions are well-suited to the demands of the situation. When they’re not, well…that “successful” advocacy organization can find itself losing, a lot.

It means understanding how to really diagnose the political context–not just who’s in power, but who has influence over them, and where there are openings and how to leverage your assets to push through them. It means fully reflecting on the victories and failures of each advocacy effort, so that you can learn from one experience to figure out what it means for the next. It means compensating for the areas where your capacity is less than you would hope, by optimally wielding the capacity you do possess. It means building your skills, and knowledge, and relationships–not so that you can “master” advocacy, as though it were a one-time achievement–but so that you can increase your versatility and pivot to different strategies as the situation demands.

This understanding, as fairly evident as it appears at this point in the development of the field, presents a different nuance to the pursuit of advocacy capacity. Now, we know that what we should be working towards isn’t just a bigger organization or better connections or a stronger base…it’s the elements that are particularly essential in this specific environment, and maybe the one that we think is headed for us next. That might require going after any one of those assets, or all of them, or something else entirely. And it will definitely demand that we know when to pull out which tools, and in what combination, and that we never stop scanning around ourselves to see how what we’re doing might not fit with what is needed.

Adaptive capacity:

A fancy way of saying “able to win again and again and again and again.”

Doesn’t that sound good?

Execution Matters: Evaluation and Getting Advocacy Right

In this final post on The Future of Nonprofits, I want to focus on a key point from early in the book, about how many (perhaps most) nonprofit organizations and their leaders are far better at coming up with creative approaches to solving problems (and accomplishing their core missions) than executing those ideas consistently and effectively.

It’s not meant to be a total bash on nonprofits and their employees; really, given how under-resourced most nonprofits are (related to our pathological aversion to investing in “overhead”–those important functions that, in fact, enable our programs and services to succeed) and how much of our working hours we spend trying to sort of keep our heads above water and look effective (whether or not we really know what “effective” would look like in our specific context), it’s not surprising that we seldom have the chance to step back and think about the kinds of processes and structures within our organizations (and our own workdays) that would raise our execution ability to a standard of excellence.

Instead, we’re always trying to do more with less (except when we’re admittedly doing less with our less, and busy making excuses for that). We stop doing some of what we should be doing, and close ourselves off to the possibilities of what we could be doing, in ways that mean, somewhat paradoxically, that we have to keep coming up with new inventions to increase our creativity, in order to compensate for how poorly we’re managing to pull off what it is that we do.

Exhausted yet?

In the advocacy context specifically, I see this when we develop campaigns that drift towards a new gimmick, or rely excessively on a particular technology, as though those are the tricks that will deliver the outcome we seek. We’re continually trying to one-up ourselves in terms of a slogan or a media event or a high-profile endorser, when what we should really focus on is hiring really good organizers, or investing in our relationships with our constituents, or personally connecting with every target policymaker (or all of the above). Or, we jump from issue to issue, diluting our potency and confusing our targets, lulling ourselves with the truth that “there are so many important causes out there.”

The Future of Nonprofits has a chart that shows how we can increase our aggregate impact either by raising our execution ability, even rather modestly, or by dramatically expanding our pool of creative ideas. There’s arguably a need for both. Given limited resources, though, it’s much more efficient to focus on marginally improving our delivery, especially because it can ripple into other areas of our organizational functioning, in terms of relationships built and skills enhanced.
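The chart itself isn’t reproduced here, but the underlying logic is roughly multiplicative, and a purely hypothetical back-of-the-envelope calculation shows why the execution lever is the cheaper one:

```python
# Purely illustrative numbers (not the book's actual figures): treat
# aggregate impact as ideas generated times the share executed well.
ideas = 10
execution_rate = 0.20

baseline = ideas * execution_rate        # 2.0 successful efforts
better_execution = ideas * 0.30          # 3.0, from a modest execution gain
more_ideas = 15 * execution_rate         # 3.0, but requires 50% more ideas

print(baseline, better_execution, more_ideas)
```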

And, so, what would it take to improve our execution in the advocacy arena?

First, we have to rigorously evaluate what it is that we’re doing: what isn’t working, and what is, and what really tips the balance. We have to identify our organizational and individual advocacy capacities, build up the areas where we are weakest, and develop benchmarks for what we should be delivering. We need to fully investigate where our own efforts have fallen short before assuming that our advocacy failures are to be blamed on adverse political or economic conditions. We need transparency and accountability for what our campaigns set out to do, just as we do in the fundraising and direct services arenas.

And we need organizational cultures committed not just to innovation, and not just to advocacy, but to excellence, and to intellectual honesty about how well we’re executing our most core programmatic functions, too.

A few weeks ago, I was reading an article about advocacy evaluation when a Board member for one of the organizations for which I do consulting (we were volunteering at the same event, and I’m never without reading material) looked over my shoulder. She shook her head at the article’s premise that the field of advocacy evaluation is far behind that of traditional programmatic assessment, and I think that her critique is largely valid: too often, our obvious good works, in the nonprofit sector, excuse the fact that they’re not always done well.

In advocacy as in the rest of our endeavors, that’s an oversight we cannot afford.

In the new year, we may find that we don’t need to continually come up with as many new strategies or “innovative” approaches, if we are consistently doing what we do very, very well. And implementing evaluation systems that allow us–no, require us–to know when that is the case.

We won’t have to take as many shots, in other words, if we can hit them when we need to.

Evaluating Advocacy, de nuevo

It’s “update” week at Classroom to Capitol.

As I read through previous posts for my summer maternity hiatus, I found a few that I really wanted to revisit, rather than repost. This is the last of the three that I have chosen for this week, with new thoughts and, of course, new questions.

One of my academic interests over the past couple of years has related to questions of how we evaluate advocacy efforts: How do we know advocacy “success”, short of absolute policy change, so that we can build on it? How can we assess organizational capacity for advocacy (to have a better sense of who will succeed, and also to know where to invest)? What kinds of interim goals should form part of an advocacy strategy, and what kinds of benchmark measures should mark our progress?

Over the past year, I’ve had the chance to apply my study and training in this area to practice through work with the Sunflower Foundation and its advocacy initiatives. It’s tremendously rewarding to be able to not only help individual advocates and nonprofit organizations seeking to develop an advocacy voice figure out how they’ll gauge their work, but also to be part of this evolving field and to work alongside a funder investing so much energy in contributing to good practice around these questions, too.

I love it.

More recently, my work with the Sunflower Foundation has allowed me to contribute to some of the Alliance for Justice’s conversations about how they evaluate advocacy, both on the front end (in terms of organizational capacity) and as advocates and their donors seek to determine the relative impact of different advocacy strategies. I’m very excited about AFJ’s revised advocacy capacity tool, which will be available online soon, and particularly about their approach to this work, which is aimed at getting as many organizations as possible to evaluate their own capacity (in a variety of areas; it’s a pretty thorough look at the inputs that we believe position an organization to succeed in advocacy) in order to build the field of knowledge about what makes a difference in ultimate advocacy success.

In Kansas, our hope is to eventually be able to help a given nonprofit organization know where it sits, on some of these capacity measures, compared to an aggregate of its peers, and also to develop strategies that are at least likely to lead to enhanced capacity in those same areas, so that we can build a strong cadre of advocate organizations across the geography and in different fields.

Refining these measures, and these tools, is important not just because we want to know what works in advocacy (so that we can get better and better and win more and more often), but also because being able to demonstrate how our theory of change is leading to tangible results should push more funders to feel comfortable supporting advocacy (or, at least, to expose that their real fear is taking a stand on controversial issues, and we need to know that, too!). We’ve come quite far in the past few years, such that advocates are no longer left to flounder in coming up with benchmarks, or to grasp for what might make sense for measurement. It’s tremendously exciting, for the academic side of me, but especially for the promise that these tools hold in making our advocacy more robust, more acclaimed, and, ultimately, more integrated into what nonprofit organizations do all day.

And it’s great to be part of it.

If your organization is interested in advocacy evaluation and/or assessing your organizational capacity for advocacy, we should talk! I’d love to connect you to resources and (full disclosure!) include you in some of our field-building efforts, too. Because once we know what works, we just have to gather the courage to go after the money to do it.

And, then, we’re unstoppable.

Evaluating Advocacy: Of jumping hoops and learning loops

If you haven’t commented yet this week, this post is your last chance! (Except, of course, that you can go back to post on one of the other two!). Tomorrow, I’ll announce the winner of the free copy of The Networked Nonprofit!

There is a lot of content in the book about how organizations can, and should, approach social media as a sort of experiment, building in mechanisms that will help them to learn quickly, and well, from what they’re trying, so that they can modify it as needed. They stress a real intentionality in this approach, an emphasis, from the very beginning, on defining what it is that we hope to accomplish, and the measures that we’ll use to help us get there. They also create space, though, for different organizations (or even different campaigns within the same organization) to define “success” differently, and they caution against reducing social media to a mere numbers game.

As I wrap up a contract evaluating an advocacy initiative for a foundation here in Kansas, and continue my reading, speaking, and contemplating about how to evaluate advocacy, and why such evaluation is so important, there is a lot from the evaluation discussion in The Networked Nonprofit that I believe applies to this endeavor of advocacy evaluation, too.

Foremost is the idea that evaluation should be actionable, that is, evaluation should give practitioners real information that they can really use, and be eminently valuable to them as a real-time check on what they’re trying. Having such information not only improves practitioners’ ability to change what’s not working, but also serves to increase organizations’ willingness to take risks (like trying advocacy or social media), because there’s comfort in knowing that we’ll be able to tell what’s working and what’s not.

They call this “learning loops”, and the way that they talk about it will sound very appealing, I believe, to anyone who has participated in the “other” kind of evaluation–that which is designed by a third party to meet a donor’s, not the constituents’ or the practitioners’, needs for information, that which produces a bound report years after anyone stopped caring (or even remembering) what was being evaluated, and that which uses criteria that don’t remotely resemble ‘success’ according to the perspectives of those really doing the work.

The details on learning loops, below, come from Kanter’s work, but this is my conceptualization of how the idea applies to advocacy evaluation, and how it differs from “traditional” evaluation.

  • Learning loops emphasize planning for evaluation from the beginning, involving stakeholders in defining success and choosing measures, rather than tacking an evaluation study on at the end.
  • Learning loops provide real-time information, so that it can be applied to change course mid-stream. Organizations take a few hours every month to ask themselves questions about what’s working and what’s not, and they adjust workplans and even strategic goals to account for what they’re learning.
  • Practitioners collect the data that feed the learning loops, and they help to interpret them. They measure engagement (who’s connecting with our work, and what are they saying about that connection?), return on investment (the traction that they’re getting from specific tactics, and which ones deserve more attention), and social change (what is actually getting better about the problems that concern us).
  • Participants engage in a process of reflection as a part of the learning loop; the priority is on really learning something from the evaluation endeavor, and there’s a recognition that we learn best when we have a chance to process with others.
  • Learning loops use low-cost, relatively low-risk experiments, to test assumptions and begin the process of organizational change, as a prelude to lasting social change, rather than waiting until the end of an expensive and lengthy activity to see if it worked.

There is still a lot that’s hard about evaluating advocacy, and there are still a lot of variables that impinge on our ability to measure precisely the impact of our interventions.

Still, this kind of advocacy evaluation, woven seamlessly into the practice of advocacy itself, holds tremendous promise for overcoming our collective resistance to the idea and, therefore, beginning to build a body of knowledge that will help us get better at doing advocacy evaluation.

And it starts with changing how we think about evaluation, not as a hoop through which some funder says we must jump, but instead as a part of the process of social change, and one that gives us another tool through which to improve our work.

If you’ve been a participant in either approach to evaluation, especially evaluating advocacy or social media efforts, how were those experiences? How might you implement learning loops in your organization, specifically in your advocacy? How does this change how you think about evaluation?

What does civic engagement look like, really?


Social workers, especially us “macro” types, use a lot of pretty fuzzy language sometimes. What does “empowerment” really mean, after all? How do we know effective advocacy when we see it?

And what, really, is “civic engagement”, and how in the world do we measure that?

Answering this question is important not just because it’s never a good idea to spend energy talking about something without really having any idea what we’re actually talking about, but also because defining and measuring and evaluating our civic engagement work is about accountability and integrity, which, after all, are some of the goals towards which our civic engagement work is aimed in the first place.

We know that civic engagement is far more than getting people registered to vote, or even than getting them to the polls. I remember a course that I took from Ernesto Cortes, of the Industrial Areas Foundation, in graduate school, and how he argued that reducing civic engagement, and the exercise of our citizenship, to voting alone really makes it essentially another aspect of consumerism–choosing between this or that preformulated option, which, of course, isn’t very engaging at all.

But the other stuff, beyond voting, is even harder to measure and truly conceptualize: what does it look like to be authentically involved in the governance of one’s own community, or one’s own life, and how do we begin to track and evaluate that engagement on a broad scale?

The folks at the Building Movement Project (I know, I know, I’m a bit obsessed) have a new paper, Evidence of Change, which discusses evaluating civic engagement efforts and, I believe, offers, if not a roadmap, at least some sparks of guidance for organizations trying to be clear about their goals in this client empowerment work and, ultimately, demonstrate its tangible value.

I’ve been thinking a lot about this, because I really believe that there are particular opportunities for the advancement of advocacy and civic engagement as legitimate activities, and, really, core strategies, of social service nonprofit organizations, but we’ll never solidify a place for them if we can’t figure out how to assess and communicate about what we’re doing, and why it matters.

Some of the new insights for me from this most recent discussion:

  • We can’t measure civic engagement by looking only at the individuals (for example, our clients) involved; truly meaningful civic engagement should be transformative not just for those people, but also for the “host” organization’s capacity for social change, and for the society and institutional structures their engagement is aimed at changing.
  • Rigorously evaluating civic engagement work requires, for many nonprofit social service organizations, TWO significant culture shifts–first towards this kind of empowerment work as a core part of the agency’s operations, and second towards seeing formal evaluation as integral to the organization’s mission. No wonder it’s so hard, and so rare.
  • Just as we’re still in the process of developing new models of social service organizations that integrate advocacy and civic engagement in their direct service work, so, too, do we need to develop new models of evaluation, able to meet the demands of these kinds of nonlinear change processes. And we need the space, within academia and especially philanthropy, for these new evaluation methods to gain legitimacy.

So getting out the vote among our clients and allies is obviously important. And being able to quantify the electoral impact of our work, and how it changes conversations about the issues we care about, is important in garnering the resources we’ll need to support its continuation. Absolutely.

But we want more for those we have the honor to serve than a choice between candidate A and candidate B. We want them to be more than consumers–we see them and know them as stakeholders, capable of helping to build the kind of society we want for all of us.

And that takes the kind of civic engagement that moves mountains.

So we’d better be ready to measure how far they’ve come.

Confessions of a Nonprofit Consultant

It’s been just about a year since I commenced my nonprofit consulting work in earnest, and I can sincerely say that I feel…ambivalent about it.

Especially in this economy, I’m asked pretty frequently by students about my consulting work, and the kinds of opportunities it provides. I tell them pretty unequivocally that it’s not where I’d recommend starting a career, both because it’s the connections that I have that make it feasible for me to build a job out of it, and because, at one point, I really needed the legitimacy and structure that an organizational home offers.

But, for me, at this point in my life, it makes a tremendous amount of sense: I get to work with organizations I care about, contribute to research and policy on a variety of causes, and have the flexibility to be a more hands-on parent during the day.

But, to me, my consulting work, and whether or not it’s “working”, has to be about a lot more than me, and my scheduling preferences. And, so, I guess as an exercise for me as much as anything, I’ve been thinking about what I like about it, in terms of my interactions with organizations, and what prompts some of those more complex emotions.

First, the completely excellent:

1. I’m relatively unbound by resource limitations and pragmatic organizational constraints. I get to make the suggestions and offer the critiques that I find most compelling, without having to always worry first whether they’re totally feasible.
2. I build organizational capacity. My favorite work is doing training, providing materials, answering questions–building up the staff, Board members, and volunteers who work directly with clients and the social problems that plague us, and increasing their ability to make a difference in their communities.
3. I have space in my life, and the legal and political latitude, to be overtly political when it’s necessary–I represent only myself, on a daily basis, and not always a 501(c)(3) organization.
4. I can tailor my work to an organization’s actual needs, in every way from how I communicate with staff to how much I charge to how much power and control they retain over the process.

And, what I’m still grappling with:

1. I seldom get to see things through to any sort of real implementation or follow-up. There’s obviously never ‘completion’ in any social change work, but a lot of my consulting involves drafting recommendations, putting new processes in place, creating templates…without much knowledge of the degree to which those tools will really be used to effect change at the organization.
2. That same freedom, to be divorced from resource and practical constraints, can mean that my recommendations seem, well, divorced from reality. A best practice is only a best practice, after all, if it can really be practiced, and I know that I sometimes set standards that don’t make sense in the daily lives of agency personnel.
3. I have some real ambivalence about how my work promotes the idea of unmooring workers from an organizational context; given how critical I am of the increasing shift of risk to employees, and away from institutions, I’m very aware of how I’m contributing to this very trend. In some cases, this is less problematic, because I know that the work I’m performing wouldn’t be done at all if not for me, but, in others, I wonder if I’m not making it possible for organizations to avoid hiring regular, full-time staff for some of these functions.
4. And, finally, I’m very cognizant of needing to always keep my work focused on building up the organization, rather than ever seeming like someone who swoops in, provides “expert” advice, and then leaves. That’s critical not just because of how we know change becomes institutionalized within an organization, but also because of my fundamental belief in empowerment.

I’d love to hear from other nonprofit consultants (and it’s certainly a growing industry!) about how you balance these tensions, and what you see as the most rewarding and most challenging parts of this work. And, nonprofit leaders who have worked with consultants, what do you look for in a consultant? What contributions do consultants make to your organization that truly enrich your work, and how do they fall short of this goal?