Tag Archives: evaluation

A better measure for a better system

How should we measure ‘well-being’?

One of my intellectual interests relates to how evaluation and social indicators can focus our collective attention on the problems that need to be addressed, setting better benchmarks toward which we should aspire.

And one of my great passions is reducing political, economic, and social inequality, to build toward a more just future.

And, here, these two worlds align.

Because we need some better measures of how we’re doing.

I don’t mean the U.S. poverty line, although clearly that needs to be revamped.

But, here, I’m thinking more of the underlying issue, not poverty but what creates the conditions for it.

We need a better measure than Gross Domestic Product per capita, because, clearly, an increase in GDP doesn’t always translate to an increase in well-being.

Look at how much more we spend on incarceration today, spending that registers as an increase in GDP even though it's clear that people aren't benefiting from that particular outlay.

We have the Gini coefficient, which measures inequality, although, perhaps not surprisingly, it doesn't carry much weight with policymakers or even pundits in the U.S.

Something like the 20/20 ratio, which compares how the bottom 20% are faring relative to the top 20%, would be even more helpful, I think.

Or the Hoover index, which calculates how much redistribution would be needed to achieve total equality.
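Since these indicators are all simple functions of an income distribution, a quick sketch may help make them concrete. Here's a minimal illustration in Python, using a made-up list of household incomes (real indicators are computed from survey-weighted data, so treat this strictly as a toy):

```python
# Toy illustration of three inequality measures, assuming 'incomes'
# is a plain list of household incomes (not survey-weighted data).

def gini(incomes):
    """Gini coefficient: 0 = perfect equality, 1 = maximal inequality."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    # Equivalent to the mean-absolute-difference definition
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

def ratio_20_20(incomes):
    """20/20 ratio: income of the top 20% relative to the bottom 20%."""
    xs = sorted(incomes)
    k = max(1, len(xs) // 5)
    return sum(xs[-k:]) / sum(xs[:k])

def hoover(incomes):
    """Hoover index: share of total income that would have to be
    redistributed to achieve perfect equality."""
    total = sum(incomes)
    mean = total / len(incomes)
    return sum(abs(x - mean) for x in incomes) / (2 * total)

incomes = [12_000, 18_000, 25_000, 40_000, 65_000, 90_000, 150_000, 400_000]
print(f"Gini:        {gini(incomes):.3f}")
print(f"20/20 ratio: {ratio_20_20(incomes):.1f}")
print(f"Hoover:      {hoover(incomes):.3f}")
```

The point isn't the arithmetic; it's that each measure summarizes the same distribution differently, and the summary we choose shapes the story we tell.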

I'm certainly no economist, or mathematician, but an indicator that clearly captured a person's likelihood of leaving poverty, or of leaving the bottom 20% or so, could, if built into our understanding of our economic system, help to crack the myth of 'rags to riches'.

So why do we use GDP per capita, when it so clearly fails to capture so much of what we really need to know, and distorts so much of the picture?

There are better measures out there, and we certainly have the technical capacity to shift to them, or even to develop something else, if we really wanted.

I can only conclude that our stubborn clinging to something woefully inadequate has much to do with how we come out looking relatively good according to that measure, and pretty blatantly unequal according to others.

If we’re not winning, after all, we can always move the goalposts.

But I think that, while metrics are surely not everything, having better measures would really help.

You manage to what you measure, after all, and, if we had some consensus about what we were working toward, we’d at least have a shot at getting there.


Measuring Social Impact


The Stanford Social Innovation Review had a special series on measuring social impact this spring, full of so many terrific insights that it took me quite a while to sift through all of the articles and then compose my thoughts, at least somewhat, to post here.

I’d love to discuss any of the pieces, and I welcome your responses to my reactions, too.

Above all, I’m very glad to see this conversation within this sphere; if we’re not asking what our true impact is, we’re missing the only metric that really matters:

Are we making the difference we intend, and that so desperately needs to be made?

  • It is somewhat disturbing, really, that an article entitled "Listening to Those Who Matter Most, the Beneficiaries" even still needs to be written. The article highlights some promising beneficiary feedback initiatives around the world, giving detailed descriptions of how the perspectives of students in struggling schools and of patients in health care settings are being used to inform program innovations. It is my hope that both the challenges outlined and the case made for the advantages that accrue when participants (I like this term better than 'beneficiaries') actively shape activities can help to push public policy in this direction, too. Then we can really get to impact.
  • There is a brief outline of a larger academic paper centering on how to evaluate the effectiveness of civic engagement and advocacy efforts. Importantly, it incorporates multiple stakeholder perspectives, but I am still dissatisfied; it feels, to me, too much like asking about participant ‘satisfaction’, which may or may not be a good proxy for efficacy, even in the context of civic engagement (which, after all, is designed to foster feelings of good will within the community).
  • My advocacy evaluation work focuses on using evaluation to improve performance, but we are often constrained by the inadequacies of our evaluation approaches to capture the rather elusive nature of advocacy and social change activities. This dynamic, between measuring to improve and improving measurement, is the subject of one of the articles. It mostly summarizes a workshop session related to evaluation, but I appreciate the inclusion of several specific and innovative approaches. Sometimes we have to get a bit ‘meta’, stepping back from our work in order to invest in the capacity to perform it better.

The folks at SSIR have been leading the field on the question of how to really define ‘impact’, and so it’s not their oversight, but I do think that we, collectively, need to spend more time within our organizations, our profession, and our field really clarifying what impact means, and what it looks like, in order to ensure that we will, indeed, know it when we see it.

But maybe approaching it from this direction (how can we measure it, before we are entirely sure what it is?) should offer some appeal.

If one of the reasons we have excused ourselves from getting serious about setting the bar for ‘impact’ accurately has been that we don’t know how we will be able to know when we’ve reached it, then perhaps addressing the latter will light a fire under us for the former.

Are we aiming for the wrong goal? Culture change and social justice

One of the blogs I really enjoy, even though it’s very challenging, is White Courtesy Telephone. A post from their archives, which I recently found, has me thinking about cultural change efforts as essential to social and policy change, and what that understanding–that, to change the policies that impact our lives, we have to change how people feel, not just about those policies, but about the people we serve–would mean for the kind of advocacy campaigns I help organizations design and execute.

Do we need to make cultural change our goal, rather than policy change?

What kinds of strategies and inputs do we need to pull that off? And how well positioned are we to embark on that work, today?

This tension (not always that tense, but certainly there are currents there) is playing out today in the immigration policy world, where I still spend a fair amount of my time.

There are those who focus most of their efforts on promoting greater communication and mutual understanding between immigrants and others in the U.S. I have a ton of respect for their work and, indeed, I think that it can promote systems change (in schools, workplaces, local governments) directly connected to how immigrants experience social policies and, ultimately, to the quality of their lives.

And then there are those of us more explicitly focused on legislative change, in our state legislatures, where we’re mostly playing defense, and in Congress, where the ongoing battle for comprehensive immigration reform challenges our capacity.

And, really, it shouldn’t be ‘either/or’, of course.

We need better policies, yesterday.

And, to get there, we need to change the conversations about the issues we care about, and to engage and activate latent supporters by cultivating a culture of solidarity and a climate of urgency.

Totally.

But, as the blog post points out, in a context of limited resources, this is often framed as a trade-off, with organizations and causes forced to choose between long-term changes in how people view their issues and more immediate (although still, often, long-term) gains in the structures that govern our lives.

Where I come down, then, isn’t so much that we should be doing one and not the other.

We need marriage equality, in law, and we also need to celebrate cultures of inclusion and equity. We need strong childcare supports for working mothers, and we also need new cultural agreements about the role of women in society. We need well-funded public schools and a commitment to the public sphere. We need workable gun laws and a culture of nonviolence.

Yes, and yes, and yes.

I think the bigger question is where we should be intentionally focusing our energies, which comes down to what we see as the causal chain.

Do we view policy change as creating the conditions in which culture change is more likely to happen–desegregation leads to greater racial understanding, stricter DUI laws lead to new social norms about drinking and driving?

Or do we believe that we have to change how people think before we can expect to win changes in the law?

Where’s our target, and, then, how do we craft our strategies accordingly?

What’s going to get us there, most surely, given our shoestring capacities and the odds we face?

What’s the right goal and the right metric to go along with it?

Social indicators and social change


I love it when I find something, online or in a journal, and I think, "THAT is what I'm going to show to my students!"

Especially if I know that it’s going to give me license to say (or at least think in my head), “I was right!”

Every year, my advanced policy students have to do a social problem and social indicator paper. They like the social problem piece just fine; it’s a pretty standard problem analysis and, certainly, there is no shortage of interesting social problems they can study.

But the social indicator piece usually trips them up, because I ask them to really think about how we know what we think we know about a given problem and that, well, gets a little confusing.

I prod them to think about the ways in which the definitions and measurements we use to understand social problems distort them, and how those distortions can be problematic when it comes to trying to solve the problems. I use the example of unemployment, often, to get them thinking about how our definition of ‘unemployed’ (not working and actively looking for work) doesn’t capture nearly all of those who would consider themselves ‘unemployed’. The same is true, certainly, for our definition of ‘homeless’. Many of those technically defined as ‘obese’ today don’t consider themselves such. And we could go on and on. There are areas where we don’t track nearly the entire scope of a problem (child abuse and sexual assault are particularly under-captured), and other problems that we don’t try to measure at all, really (until fairly recently, we didn’t measure asset poverty, for example, or wealth inequality).

And what we measure matters, I tell them, so, together, we study not only what we know about the problem, but what we really should know, in order to have the best chance of harnessing our social policies to fix it.

Enter Beth Kanter’s post about social media within nonprofit organizations, where she makes the point that, when it comes to metrics of engagement and reach of social media efforts, “what gets measured gets better”.

When organizations see, visually, that their emails are mostly going unopened or that their advocacy alerts result in visitors bouncing right off their website, they tend to be motivated to do something about it. When they see that their Facebook connections have been flat for months, they institute strategies to improve.

Measuring matters.

Which is the whole point of the social indicator assignment, and of my stressing to students that we have to pay attention to what we’re measuring–and how–and what we’re not, because that understanding (and lack thereof) is key to why we are and are not comparatively successful in solving the problem.

If what gets measured gets better, what should we be measuring, or at least measuring better, to give ourselves the best tools with which to combat the problem? How can pushing for data sometimes be the catalyst for bringing about change (think about progress around racially motivated policing practices)?

And what should we be measuring, within our organizations (client satisfaction, recidivism, impact), in order to model what we want to see in social policy and to home in on the areas of our own work that need improvement?

What gets measured gets…better. So let’s get measuring.

Advocacy Evaluation and Being ‘Data-Informed’

I wrote a post not too long ago about ‘data-driven cultures’. And then I read Measuring the Networked Nonprofit, and, in just a chapter, Beth Kanter and her co-author changed, somewhat, how I talk about the role of data in nonprofit organizations.

Social services aren’t ever going to be totally ‘data-driven’. There are a lot of factors that impact our decisions and our programming.

And that’s how it should be.

Rather than trying to make social workers slaves to spreadsheets, or pretending that we can make rational every factor that influences our operations, we need to become data-informed organizations, embracing both the power of data and its limits.

As Kanter advises, we need to spend a lot more time thinking about the data we collect than we do collecting it. As I see in the advocacy evaluation collaborative of which I’m a part, we need to find ways to unobtrusively gather data–weaving that into the work as much as possible–so that we have time to sit around and talk about what this means (which, in some cases, is how we ‘analyze’).

We need to resist the temptation to dump data on someone’s desk, thinking that our work is done when the report is published. I ask my clients, from the very beginning, what it is that they hope to learn from a given evaluation effort, what questions we need to ask to figure that out, and with whom they need to share the answers they glean. We plan for usefulness from the start.

It makes me think about an organization I have worked with over the past 18 months or so, which has a Quality Improvement Department, staffed with just a few full-time employees whose job it is to cull through the organization's data, looking for patterns and making sense of what they see, and to systematically share information with others within the organization, so that, together, they can ask the most important question:

“So what?”

But this distinction between being data-driven and data-informed has special importance in advocacy, I think. We’re always exhorted, in advocacy, to have ‘hard facts’, as though the stories we share about policy impact are somehow too soft and squishy to be meaningful.

But the best nonprofit advocates already know that the most powerful advocacy comes from weaving data and narrative, from analyzing numbers to answer hard questions, and from relying on all kinds of knowledge to inform our decisions.

In advocacy, we know that being ‘data-driven’ can lead to outcomes that don’t work for individuals who don’t fit a typical pattern. We know that data don’t change hearts and minds, and that developing power requires creating spaces for people’s voices.

We know that we must be data-informed.

And driven by a vision.

The more we know…

I wrote pretty recently about the benefits of doing advocacy evaluation, for advocates. Instead of viewing evaluation as a chore to be suffered through–for the sake of funders or others trying to hold nonprofits ‘accountable’–we should view it, correctly, as an opportunity to learn more, hopefully in real-time, about whether what we’re doing is working, how we could get better results, and where to focus our limited resources.

I believe that.

It’s why my eyes light up when I help a nonprofit safety-net dental clinic, working to bring affordable, quality health care to rural Kansas, understand how conducting a policymaker rating as part of their advocacy evaluation can help them figure out where their potential allies are and compare how different messages are moving their targeted elected officials.
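To make that concrete: a policymaker rating, in its simplest form, is a structured scorecard, rating each target's influence and support over time. Here's a minimal sketch in Python; the 1-to-5 scales, field names, and thresholds are my own illustrative assumptions, not the clinic's actual instrument:

```python
# Hypothetical sketch of a simple policymaker rating. The scales and
# thresholds below are illustrative assumptions, not a standard tool.

from dataclasses import dataclass

@dataclass
class Rating:
    name: str
    influence: int       # 1 (low) to 5 (high) influence on the issue
    support_before: int  # 1 (opposed) to 5 (champion), before outreach
    support_after: int   # same scale, after outreach

ratings = [
    Rating("Legislator A", influence=5, support_before=2, support_after=4),
    Rating("Legislator B", influence=3, support_before=4, support_after=4),
    Rating("Legislator C", influence=4, support_before=1, support_after=2),
]

# Potential allies: influential targets who are now supportive
allies = [r.name for r in ratings if r.influence >= 3 and r.support_after >= 4]

# Movement: how far each target shifted, to compare messaging efforts
for r in ratings:
    print(f"{r.name}: moved {r.support_after - r.support_before:+d}")
print("Likely allies:", allies)
```

Re-scoring after each round of messaging turns a gut feeling ("I think she's coming around") into a trend you can actually discuss.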

But something from Measuring the Networked Nonprofit got me thinking even a little bit differently about how to use advocacy evaluation for our own, internal purposes.

Because measurement can make the case for advocacy work within our organizations, to get the power, resources, and attention we need. And deserve.

If advocacy evaluation can show that our campaigns, and our presence in the public dialogue, raise awareness about the organization, Board members who worry about the ‘negative publicity’ from advocacy might reconsider.

If advocacy evaluation can demonstrate that clients who engage in advocacy have stronger attachment to the organization overall, direct-service practitioners may prioritize advocacy work more as part of their own work days.

If advocacy evaluation makes the case that advocacy contributes to (we don’t have to prove attribution here) stronger partnerships with agency allies, then there might be money for advocacy functions as part of other departmental budgets.

I still believe in advocacy evaluation primarily in terms of the pursuit of knowledge.

There is so much we need to learn, and know, in order to work better. And win more.

But if we can also identify evaluation questions, and construct methods, that position us to advocate more effectively within our organizations…then advocacy evaluation just got even more valuable.

The more we know.

If we REALLY thought like a for-profit corporation

See? For-profit corporations get this.

It’s an axiom these days:

Nonprofit organizations should operate ‘more like a business’.

The people/donors/media/policymakers who advise this are seldom very specific about what this corporate approach would really look like for social service agencies.

I mean, it's not like the for-profit world has a lock on efficiency or 'good governance', and certainly many nonprofit organizations can measure their impact on a scale more impressive than most businesses can.

I think, too often, this exhortation to ‘run like a business’ is really code for, “we’re uncomfortable with the whole ‘social impact’ thing, and not really sure that we should collectively have a responsibility to [fill in the blank worthy cause], so…can’t you just ‘take care of that yourself’, like a business?”

And, my obvious frustration with the 'wash our hands of this' approach aside, this post is about one place where I'll concede that nonprofits for sure have a lot to learn from the for-profit world:

We need to get much, much more comfortable with failure.

Instead of feeling that every grant report we submit has to be full of unqualified successes and ways in which we exceeded all expectations, we need room to acknowledge that something didn’t work. Maybe we know what we need to do differently next time, or maybe we’re not sure, and we need an investment of some wisdom–and space to grow it–to gain some perspectives so that we can try again.

Instead of feeling that every annual report has to gloss over our struggles in favor of shiny examples of victory, we need opportunities to come together with others working toward the same goals, to figure out how to move beyond the obstacles that thwart us.

We need a research and development approach to evaluation, like that espoused by TCC Group, where we use evaluation to try out innovations, explore how to scale up promising pilots, and construct a framework that helps us to distill the most essential intervention elements, so that we can most efficiently get to the results we’re seeking.

We need support from those critical to our field (those who financially support us, those who volunteer with us, those who sanction our existence) to engage key stakeholders in the process of "making meaning" from our findings, so that best practices are really that, we have an honest dialogue about what worked and what didn't, and we can quickly make the modifications needed for improvement, instead of letting subpar approaches languish just because everyone's too scared, or too polite, or too socialized to own up to our failures.

In the world of big business, the best-selling books are full of reminders that, to succeed on a big scale, you have to fail massively. Few industries face tasks as daunting as those with which we nonprofits concern ourselves: preventing child abuse, ending homelessness, reducing child hunger, stopping suicide.

The world needs us to succeed.

And that means that we have to learn to risk failure.

Just like a business.