Good decision making in social service organizations

Does your nonprofit organization make good decisions?

Consistently, over time, when it counts, in ways that contribute to impact?

How do you know?

And how do you establish processes that make it more likely that you keep making good decisions, to drive towards your vision of change?

I’m a little obsessed with these questions right now, contemplating what distinguishes nonprofit organizations that thrive–and bring impact along with them–from those that sort of muddle through or coast–failing to leave the mark that they could.

I have been thinking about this a lot more since reading Decisive, and I’ve looked at the organizations with which I’ve been working most closely over the past few years, through the lens of that analysis, for patterns and ideas about how to catalyze better decision making.

But I’m also very interested in your experiences and your practices, to drive good decision making.

What works for you, what have you learned, and what are you willing to share?

  • You need good information for good decisions: It sounds obvious, I know, but many organizations still have little evaluation capacity, especially in advocacy, and few channels to systematically collect and, even more importantly, interpret the information they need. This has to work at multiple levels, too; you need base rates and big-picture data, but you also need stories and ‘texture’ to complete the picture of what is really going on with your organization and what you really must know. Without intentional methods for gathering and acting on this information, it won’t happen serendipitously.
  • Organizational culture matters: Organizations need a climate where people aren’t afraid to experiment and dissent, if they are to get good decisions over time. Maybe our nonprofits should have ‘failure of the year’ contests, where we celebrate the little failures that, collectively, can inform our futures? Maybe we need to think about how to institutionalize the devil’s advocate roles that must be part of our conversations.
  • Adaptive capacity is essential: We have to scan not just our own processes and histories, but also the landscape, if we are to have a chance to succeed not just in today’s context, but tomorrow’s, too. That means developing listening channels that help us to understand what other organizations are doing, what social indicators are telling us, and what our best predictions suggest is coming.
  • We have to recognize choices when they are present: There’s so much inertia in our lives, and our organizations are no different. To combat this, organizations need to know when to get off auto-pilot, so that we don’t limp through opportunities to make decisive changes. Not acting is an action, as I tell my students every semester–in advocacy and in nonprofit governance–and so we need to recognize when we’re faced with a decision point.

What are your techniques for making good decisions? What guidance would you share with others? What really excellent decisions have you made, especially if they weren’t immediately recognizable as such? What not-so-great decisions have you made, and what led to those?

Devils and good decisions

One of my favorite insights from Decisive relates to the importance of diverging opinions in crafting good decisions.

I think this is incredibly important, perhaps especially when organizations–like the nonprofits with which I normally work–are embarking on new journeys, including the effort to integrate advocacy into their work. I would much rather have someone asking critical questions, even really difficult ones, than have the organization coast along, having failed to adequately account for the potential risks of their positions and to prepare to articulate the rationale behind the transition.

I think the ‘devil’s advocate’ role is a good fit for social workers, in particular.

  • We tend to care deeply about our organizations and their purposes, which is essential; someone who wants to see the organization fail, or who just delights in ‘blowing stuff up’, cannot possibly play the same productive role.
  • Our excellent communications skills can help us to articulate our concerns within the context of a desire to serve the organization, with attention to the socio-emotional consequences of challenging people’s ideas.
  • Our relationships with peers can help us to surface others’ contrary arguments, too, when they may not be willing or able to voice them themselves, enriching the process.

But although criticism is a ‘noble function’ (p. 97), it’s a role that can’t be filled regularly by the same person, if the process is to work well.

If someone always plays the ‘no’ card, that person will be marginalized within the organization. And that’s a lot of pressure to put on one person, too, especially given the reality of power imbalances within any organization.

What we need, instead, are structures that create space for devil’s advocates to work, and, indeed, that encourage them.

I’ve seen this in some of the organizations with which I work: the nonprofit whose staff surfaced an internal agency policy as the first target of their advocacy agenda, and got an encouraging ‘go ahead’ from the leadership, who acknowledged that there was an authentic interest in finding better solutions than what they had initially crafted; the agency whose practicum students are invited to share candid feedback about the organization after grades have been posted and recommendations written; the organization that started our advocacy TA process with meetings with every departmental team, sharing the proposed work plan and giving people a chance to veto particular projects and suggest new directions (time-consuming, yes, but buy-in afterwards was through the roof).

These organizational practices–and, indeed, they have to be practiced to become ingrained–take the onus of being the ‘devil’ off any one person or even department (even we skilled social workers, who are, I think, a bit less conflict-averse than those in many other professions) and, instead, enshrine it in the agency’s operations.

We invite the ‘devil’ to sit down at the decision-making table with us…and we are the better for it.

Between A and B

Photo credit RyanBSchultz, Creative Commons license via Flickr

I have my Dad to thank for some of my best life lessons:

  • “Don’t ever leave your car door open when the vehicle is running.” (it’s just a bad idea, people)
  • “Choose the right partner and the rest of life’s decisions will be easier.” (as I’m drinking the mocha my husband just made me, listening to him clean the kitchen)
  • “Never choose yes or no on ‘A’–you always want to choose between A and B.”

It’s that last one that I thought of while reading Decisive.

The book asserts that ‘whether or not’ decisions fail 52% of the time over the long term, compared with 32% for decisions made between two or more alternatives.

Just like my Dad said.

But in advocacy, how often do we skip straight to our preferred option, often forgetting that there even are alternatives, and then wonder why we are unsuccessful in getting people to sign up for our ‘choice’…or why it doesn’t work as well as we had envisioned, even if we can get it through?

What would it look like, instead, if we crafted our advocacy such that policymakers can choose between two options, instead of asking them to say yes or no to one?

They would have to be real options, not just shams designed to make our ‘choice’ look better by comparison.

Which would mean that we would have to envision actual, viable, even desirable alternatives to our ‘pet’ approach.

Not easy to do, especially when we have invested so much, so often, in a particular route to change. Decisive addresses that by encouraging multitracking, so that we don’t become so wedded to a particular idea that we take any criticism of it completely personally (p. 55).

I am doing a lot of multitracking right now for my advocacy work around improving college outcomes for low-income students. Should we push for changes to Pell Grants such that they offer ‘early commitments’ to students whose trajectories could be influenced by the knowledge that assets are set aside for them? Encourage financial institutions to offer college savings accounts with low minimum balances? Work with states to pass progressive elements in their 529 plans? Reform student loans so that they do not strip as much wealth out of graduates’ households? Increase funding to public institutions to constrain tuition increases? Make tax policies refundable, so that low-income filers can benefit?

Yes and yes and yes and yes.

Because, really, what we want is the problem solved, right?

Is it that devastating if policymakers want to solve it in a different way than we might initially prefer, really? Are we so certain that our first option is the best?

One of the mental exercises suggested in Decisive, to help generate options, is to imagine that you can’t have the policy you’re advocating for today (p. 47).

What would you want to happen then? What’s your ‘B’?

And how could you use this to generate options that may bring unlikely allies to your side? And to salvage victory from the precipice of defeat? And to test innovations that just might yield some significant impacts?

Thanks, Dad.

And, no, I never buy those warranty protection packages.

I promise.

More ‘ands’

It seems like everywhere I go, I am encouraged to ‘say no’.

Setting boundaries for my kids, paring down my to-do list, retreating from commitments in order to reduce my stress.

But I’m just not sure it’s for me.

I think that some of my greatest distress comes from not saying ‘yes’ enough.

The guilt I feel while working at night, remembering when my kids wanted to get all their swim stuff down in the garage (for some reason) and I didn’t stop what I was doing to make it happen. Hearing about the political discussion that I didn’t attend and wishing I had been there. Missing friends I haven’t seen. Wondering if I wouldn’t have been better off pushing a little harder, to do a little more.

Because less sometimes means missing out on valuable opportunities. Sometimes what we’re saving ourselves from is perceived strain, and what we’re really denying ourselves are exciting options.

I think this is true for our nonprofit organizations, too.

I’m not denying that organizations can get overburdened, that mission drift is real, or that nonprofit leadership needs to be sensitive to reasonable workloads and meaningful investment in staff well-being.

But we tend to operate from a scarcity mentality, assuming that any new thing we take on has to mean giving something else up, despite evidence that, for example, expanding services to a new area can mean new donors and new volunteers, such that overall capacity is enhanced, or that adding critical services in one area can improve outcomes of another service, rendered somewhere else.

We almost always ask, when faced with choices about how to proceed, “Should we do this OR that?” when the best question may be “How can we position ourselves to do both?”

We almost always assume that every resource we possess is finite, despite knowing in our core that human potential is anything but.

And, in an effort to ensure that we aren’t taking on too much, we may end up doing too little, and denying ourselves the chance to say ‘and’.

501(c)4s: Serving a valuable public purpose

I have to get back to the hard work of coming up with my own content next week (!), but here’s one more borrowing from a really fascinating conversation on the New York Times opinion page, about whether the controversy over the IRS’ additional scrutiny of Tea Party and other conservative groups suggests that 501(c)4 organizations do not actually serve a legitimate public good and, therefore, do not deserve tax-exempt status.

You can read the debate for yourself, but I certainly agree with the commentator who argues for preserving the tax status of 501(c)4s, noting that organizations like The Sierra Club and AARP “are too politically engaged to be charities, yet they work toward what each believes will be a better world.”

But I think the larger question is this:

Why are organizations like AARP too politically engaged to be charities?

Why do we have such strict limits on nonprofit political engagement that we are so quick to rule that an organization that undeniably serves a public purpose–even if I do not happen to completely agree with its vision–is not a ‘charity’?

In debating whether organizations should be allowed to organize themselves as 501(c)4s, and whether that is a valid and valuable designation in our tax system, are we asking the wrong question? Should we instead be considering whether we unduly muzzle our 501(c)3 organizations, pushing organizations clearly operating in the public good into the (c)4 realm, distorting that category and, maybe, leaving it more vulnerable as a result?

I absolutely believe that public interest lobbying and political engagement are not only legitimate activities but, indeed, completely essential to the functioning of our democratic system, at least as currently structured. I believe that organizations should receive some harbor within the tax code for taking on that valuable work.

But I also think that fighting to end hunger is just as noble as handing out food, that working for better health care laws is just as important as taking care of those who are sick, and that speaking out about gender inequality is just as needed as sheltering those fleeing domestic violence.

If we agree, then maybe we need new provisions in the tax code to allow individuals who financially support that important work to receive the same tax advantages as those whose dollars fund more immediate relief.

Valuable public purposes all, no?

Evaluation Capacity that Sticks

In honor of Labor Day, and with some grieving for the end of my summer, I’m fully embracing the contributions of others this week.

It takes a village to come up with these blog posts, I guess?

One of my projects this year is an advocacy evaluation capacity-building initiative, in partnership with TCC Group.

I have been really excited to get to work alongside their consultants–having spent a fair amount of time in TCC webinars, I find that co-presenting on advocacy evaluation with them is a real gift.

Recently, TCC distributed an article about some of their learning, from this project and others, about how to build evaluation capacity that truly transforms organizational practices, adding net capacity that transcends the period of intense consultant engagement.

It’s something we’ve been talking about a lot in the Kansas context, too: how do we ensure that we’re not just swooping in to do some evaluation with and for these organizations but, instead, helping them to build knowledge and integrate structures that will enable them to take on advocacy evaluation in a sustained and effective way?

A few points from the article, and from my engagement with this project, that I think resonate more broadly in the consulting and capacity-building fields:

  • Organizations have a lot to learn from each other: The organizations in the cohort with which I’m working clamor for more time with each other. Consultants don’t have a lock on knowledge, and not all capacity-building happens within the confines of the consultant-grantee relationship.
  • Learning needs immediate application: One of the challenges with our Kansas project is that it started in the fall, which meant that, by the time organizations had outlined their evaluation questions and begun to select instruments, the legislative session had arrived and they had no time to implement their ideas. Learning not applied can atrophy quickly, and we’re considering how to restructure the calendar for future cycles with this in mind.
  • We need to acknowledge the resource/capacity link: It’s easy, of course, to say that the way we build capacity is to add dollars. Of course. And there’s obviously not a 1:1 relationship between, in this example, evaluation capacity and organizational budgets. But it’s also true that we can learn everything there is to know and still be crippled, in significant ways, by scarce resources, which means that true, sustainable capacity building in any area of organizational functioning also has to account for how we resource our organizations. Period.

I believe in the process of helping nonprofit leaders ask good questions about what they’re doing, the impact that it’s having, and what they need to change.

And I want to ensure that they are positioned to keep asking those questions after I move on.

To make a real difference, it has to stick.

How would nonprofits fare, on trial?

This post from White Courtesy Telephone described a scene at a philanthropy conference a few years ago, when a jury of the field’s peers ‘put philanthropy on trial’.

Prosecution and defense, both from the philanthropy world, presented evidence on either side of these critical questions:

“Was philanthropy, or was it not, underperforming in its quest to help create social change? Should it, or should it not, be convicted for its lackluster outcomes?”

And 10 out of the 12 audience members chosen to deliberate philanthropy’s fate voted to convict.

The post emphasizes that there was little discussion, afterwards, about the significance of that verdict, or about the evidence that jurors, respectively, found most persuasive, or about the criteria that should be used to determine the relative effectiveness of the field.

And, interestingly, there has never been a retrial.

I would encourage you to read the post; nonprofits and nonprofit advocates certainly have an interest in how philanthropists are debating these questions of impact, and how their perception of their progress in this area may speak to the need for changes in how foundations interact with their nonprofit grantees.

But I am wondering how a similar trial would go for our nonprofit social service sector, itself.

Should we be convicted for failing to make significant progress on some of the most pressing social problems of our day? Or should we be excused, given the increasing pressures put on the sector and the abdication by government, in particular, of its responsibilities in these same areas?

By what criteria would we be gauged to be ‘succeeding’, or not, in our quest for impact?

Are there parts of our sector that would fare differently than others? Are organizations working in health care, for example, doing better than those combating poverty? Is it even possible to dissect our field this way?

Would certain voices in our sector be more critical than others? Has this role of internal critic fallen mostly to particular voices in the field today, or are some actors simply positioned in ways that make them more, or less, concerned about nonprofit performance?

How would you vote, as a juror deciding the fate of our sector? What evidence would you present, as a prosecutor or as the defense?

And how would you feel, as a defendant?

What if we were judged not by other nonprofit actors, but by our most important ‘peers’–the clients whose interactions with our organizations give us our legitimacy?

How would they judge your specific organization and the overall field with which they engage?

What might we learn from such an exercise? What do we stand to lose?

Policymaking for small failures

This one sat in my draft folder for a while, as I spent the first part of the summer teaching and consulting and the month of July mostly playing.

One of my favorite bloggers anywhere, Beth Kanter, had a post on one of my favorite topics:

failure.

Specifically, how nonprofits can and should plan for ‘affordable losses or little bets to improve impact’.

Like everything she writes, it’s well worth reading.

But I am thinking about these small failures not in the nonprofit organizational context, as Beth so ably covers, but in terms of policymaking.

Because there’s a lot that we need to learn in that arena, too, and, so, a lot on which we need to fail.

Our hesitancy to risk policy innovations stems, I think, in large part from fear of failure, when such failures may be exactly what we need, as long as they are small enough and contained enough not to become disasters.

We don’t know, for example, all that much about what it’s going to take to stem the rise in obesity rates, but we have some ideas of things to try. The same thing is certainly true in addressing educational disparities, or combating addiction, or other vexing problems where we have many more questions than answers.

We need more research, yes, and analysis, but we also need to take some chances, with the understanding that we will scale those approaches that don’t fail and quickly abandon those that do.

Such deliberate failures require nimble structures, though, and courageous leaders.

And we don’t necessarily have those in abundance in our policymaking systems. I recognize that.

But I think it’s worth putting it out there as a valid aim, this goal of small failures and the context that would support them.

The pressing nature of our greatest social problems demands that we accept neither reiterations of the same policies that aren’t bringing the impact we need, nor wholesale rejection of those approaches in favor of the next shiny thing that may or may not work any better.

Instead, we need to move boldly but modestly, testing and evaluating and adjusting and adopting or abandoning.

Small failures, in pursuit of big change.

Learning together, for advocacy

One of Beth Kanter’s posts on measurement within nonprofit organizations addressed the “data disconnect” between organizations and their funders.

She cites research finding that more than half of nonprofit leaders say that funders prioritize their own data needs over nonprofits’ need for information about their own work–an obviously concerning indicator, given what we know about the importance of data to inform good decision-making by nonprofits in search of collective impact.

There are two key points from the post that I have been mulling over, especially as the Advocacy Evaluation Collaborative of which I am a part enters its second year.

First, it’s clear that nonprofit organizations want to use data more fully and more systematically to guide their work. Nonprofit leaders not only assert this; they are also dedicating some resources toward that end, which is probably even clearer proof that they’re serious. There are real constraints here, though, particularly the lack of financial resources, within most grants, specifically dedicated to evaluation. We see this in the policy world, too; there’s an assumption that, somehow, evaluation just ‘gets done’, when, in truth, doing it well often carries significant costs.

There is also some confusion about what, precisely, should be measured, but, to me, this isn’t as much a problem in the evaluation arena as in the context of pursuing impact itself. Once we’re clear about the changes we want/expect/hope to see come from a particular change strategy, it’s obvious what we’re going to measure: did those changes, in fact, happen? So, to the extent that there is a lack of clarity, or even disagreement, between organizations and funders about what should be measured, I think that reflects a larger chasm around what is supposed to be pursued.

Second, there is a risk that, as data are emphasized as part of the change process, there will be data collection for its own sake, with short shrift given to the analysis and utilization of data. And that’s a real problem, since, really, getting some data is not nearly as important as the ‘sense-making’ process–figuring out what the data are saying, and what to do about it, and what it all means. Especially when there are inadequate resources dedicated to evaluation, though, something will get squeezed and, if evaluation is conducted primarily in order to satisfy funders that it is, in fact, happening, then being able to produce data may be valued over really learning from the questions asked.

As I think back on this first year of working pretty closely with both advocacy organizations and health funders in Kansas around advocacy evaluation, I’m relatively encouraged. There have been times when the process has seemed laborious, and I have felt particular kinship with the organizations, which have often struggled to make time to dedicate to the evaluation learning in the midst of an especially tough policy climate in our state.

But I think we’re mostly getting these two critical pieces right, which is why I’m hopeful. There has been a lot of talk between funders and organizations about how to decide what to measure, and about the importance of agreeing, first, on a theory of change that is worthy of evaluation, and then letting the questions flow from that understanding of the impact that is supposed to occur. And the data collection piece is actually a fairly minor chunk of the overall workflow, with much more time spent on articulating those theories of change, analyzing data together, and finding ways to incorporate evaluation rather seamlessly into organizations’ flow of operations, in order to increase the likelihood that they do something with what they learn.

It’s that emphasis, I guess, which has made the difference: on learning, collaboratively, and on evaluation as a tool with which to move the work forward, rather than a hoop to jump through.

I don’t know how you take it to scale, really, this side-by-side process of those with money and those doing the work sitting down together to talk about what they need to learn and how to learn it. This process has only involved a few health funders in one state and six advocacy organizations, and it has still been pretty expensive and time-consuming. But maybe, through peer sharing, nonprofit organizations will come to demand this kind of transparency and collegiality. And foundations will come to expect that they can be part of the learning too. And, together, we can harness the power of inquiry to get better at what we are, together, trying to do.

Change the world.

KPIs and Advocacy Evaluation

I have been corresponding some with the advocacy evaluation folks at the Alliance for Justice, as they make some changes to their advocacy capacity evaluation tool.

In particular, we’ve been talking about how to help organizations put it all together–tie all of the components of advocacy capacity together, to leverage them for greater success–while also continually assessing where and how to develop additional capacity, in order to reach the organization’s full potential. I was thinking about those conversations, and our quest for measurement indicators that would help organizations gauge the extent to which they’re achieving that ideal, when I read Beth Kanter’s post about key performance indicators. She makes the analogy that measurement is like hooking up a big TV–you have to be able to look at each component individually, and then also figure out how they fit together. I have, perhaps rather obviously, never hooked up a television of any size, but I think I can still get the visual there.

And, certainly, I can see how having clear indicators–data points that serve as critical benchmarks by which to assess our work–can make clear a process of evaluation that would otherwise be quite overwhelming and murky.

But, I guess like hooking up the TV, I think it’s that ‘pulling it all together’ part that is the trickiest. I mean, not that identifying those Key Performance Indicators for the elements of advocacy capacity is necessarily a straightforward exercise itself, but, as AFJ and I are discussing regarding advocacy capacity evaluation, there is something nearly unquantifiable, sometimes, about taking those individual parts and putting them to work. That, to me, is the parallel between TV assembly and evaluation, perhaps especially in advocacy.

Here are some of the very tangible ways in which this becomes manifest in advocacy evaluation. What are your key performance indicators, for your advocacy? And how would you quantify that elusive ‘putting it all together’ element?

  • Ability to know which capacities to use when and how to coordinate them, for example, when it’s good to bring the field in and when it’s better to keep the issue quiet. This is the ability to translate capacity–which is really just a latent strength, until activated–into effective advocacy. It is also the will, within the organization, to commit capacity to effective advocacy, instead of reluctance to ‘expend’ resources for social change.
  • Judgment, about how to make decisions and how to wield the organization’s capacity within a dynamic environment. This relates to adaptive capacity, and the ability to respond to changing conditions, but it also defines how organizations can get the most impact from their advocacy capacity. It means deciding when to engage in which advocacy arenas, and what are appropriate goals, but it’s one of those things that is best observed in its absence, which can make the measurement hard. One of the judgments that nonprofit advocates have to make is how to learn lessons from previous advocacy encounters that can help to inform future negotiations, and future strategy planning, and then how to incorporate that learning into the next round of decisions. Advocacy evaluation, including the isolation of key performance indicators, clearly augments this capacity, in a virtuous cycle, but only if there’s intentional reflection around evaluation after action.
  • Ability to analyze how different elements of capacity build upon each other. This means, in the context of advocacy evaluation, knowing that strengths in one area of capacity can improve functioning in another. You can leverage your coalition partners for indirect relationships to policymakers, and you can approach policy analysis in such a way as to strengthen your ability to take on judicial advocacy, for example. We sometimes see these as disparate elements, even dividing them up within advocating nonprofit organizations, which can limit organizations’ ability to get the greatest ‘synergy’ from their efforts.
  • Ability to leverage the overall capacity of the field, for greatest momentum. Advocates need to be able to figure out where the field’s gaps in advocacy capacity are, where there is, perhaps, an excess of a particular element of capacity, and how organizations need to complement each other’s capacity for maximum impact. So, if one organization isn’t very well-developed in media advocacy, for example, but there are others in the field with those relationships and skills in abundance, it may not be a failing for that organization to lack extensive capacity there, as long as there is good coordination with partners in the field, such that those particular areas are well ‘covered’ by someone.

We’re not at the point, yet, of having crystallized key performance indicators with which to measure these elements, but that’s where we need to be, ultimately. Without the understanding of how to tie elements of capacity together, we run the risk of having a television with lots of connected wires and, yet, still, not the clearest picture.

Connections matter, which no one understands like an advocating social worker.