Tag Archives: evaluation

A better measure for a better system

How should we measure ‘well-being’?

One of my intellectual interests relates to how evaluation and social indicators can focus our collective attention on the problems that need to be addressed, setting better benchmarks toward which we should aspire.

And one of my great passions is reducing political, economic, and social inequality, to build toward a more just future.

And, here, these two worlds align.

Because we need some better measures of how we’re doing.

I don’t mean the U.S. poverty line, although clearly that needs to be revamped.

But, here, I’m thinking more of the underlying issue, not poverty but what creates the conditions for it.

We need a better measure than Gross Domestic Product per capita, because, clearly, an increase in GDP doesn’t always translate to an increase in well-being.

Look at how much more we spend on incarceration today, spending that registers as an increase in GDP, even though it’s clear that people aren’t benefiting from that particular outlay.

We have the Gini coefficient, which measures inequality, although, perhaps not surprisingly, it doesn’t hold much cachet with policymakers or even pundits in the U.S.

Something like the 20/20 ratio, which compares how the bottom 20% are doing relative to the top 20%, would be even more helpful, I think.

Or the Hoover index, which calculates what share of total income would have to be redistributed to achieve complete equality.
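
To make these measures a little more concrete, here is a minimal sketch, in Python, of how each could be computed from a list of household incomes. This is my own illustration, not drawn from any official methodology, and the income figures are invented purely for demonstration.

```python
# Illustrative only: three inequality measures computed from made-up incomes.

def gini(incomes):
    """Gini coefficient: 0 = perfect equality, 1 = one household has everything."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    rank_weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * rank_weighted) / (n * total) - (n + 1) / n

def ratio_20_20(incomes):
    """Income share of the top 20% divided by the share of the bottom 20%."""
    xs = sorted(incomes)
    k = max(1, len(xs) // 5)
    return sum(xs[-k:]) / sum(xs[:k])

def hoover(incomes):
    """Hoover index: share of total income that would have to move to reach equality."""
    total, n = sum(incomes), len(incomes)
    return sum(abs(x / total - 1 / n) for x in incomes) / 2

households = [12_000, 18_000, 25_000, 40_000, 55_000, 70_000,
              95_000, 140_000, 250_000, 600_000]
print(f"Gini:        {gini(households):.3f}")
print(f"20/20 ratio: {ratio_20_20(households):.1f}")
print(f"Hoover:      {hoover(households):.3f}")
```

Computing these is trivial; as argued below, the barrier to adopting them isn’t technical.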

I’m certainly no economist or mathematician, but an indicator that clearly captured a person’s likelihood of leaving poverty, or of moving out of the bottom 20% or so, could, if built into our understanding of the economic system, help to crack the myth of ‘rags to riches’.

So why do we use GDP per capita, when it so clearly fails to capture so much of what we really need to know, and distorts so much of the picture?

There are better measures out there, and we certainly have the technical capacity to shift to them, or even to develop something else, if we really wanted.

I can only conclude that our stubborn clinging to something woefully inadequate has much to do with how we come out looking relatively good according to that measure, and pretty blatantly unequal according to others.

If we’re not winning, after all, we can always move the goalposts.

But I think that, while metrics are surely not everything, having better measures would really help.

You manage to what you measure, after all, and, if we had some consensus about what we were working toward, we’d at least have a shot at getting there.

Measuring Social Impact

The Stanford Social Innovation Review had a special series on measuring social impact this spring, full of so many terrific insights that it took me quite a while to sift through all of the articles and then compose my thoughts, at least somewhat, to post here.

I’d love to discuss any of the pieces, and I welcome your responses to my reactions, too.

Above all, I’m very glad to see this conversation within this sphere; if we’re not asking what our true impact is, we’re missing the only metric that really matters:

Are we making the difference we intend, and that so desperately needs to be made?

  • It is somewhat disturbing, really, that an article entitled “Listening to Those who Matter Most, the Beneficiaries” still needs to be written. The article highlights some promising beneficiary feedback initiatives around the world, giving detailed descriptions of how the perspectives of students in struggling schools and of patients in health care settings are being used to inform program innovations. My hope is that the challenges outlined, and the case made for the advantages that accrue when participants (I like this term better than ‘beneficiaries’) actively shape activities, can help to push public policy in this direction, too. Then we can really get to impact.
  • There is a brief outline of a larger academic paper centering on how to evaluate the effectiveness of civic engagement and advocacy efforts. Importantly, it incorporates multiple stakeholder perspectives, but I am still dissatisfied; it feels, to me, too much like asking about participant ‘satisfaction’, which may or may not be a good proxy for efficacy, even in the context of civic engagement (which, after all, is designed to foster feelings of good will within the community).
  • My advocacy evaluation work focuses on using evaluation to improve performance, but we are often constrained by how poorly our evaluation approaches capture the rather elusive nature of advocacy and social change activities. This dynamic, between measuring to improve and improving measurement, is the subject of one of the articles. It mostly summarizes a workshop session related to evaluation, but I appreciate the inclusion of several specific and innovative approaches. Sometimes we have to get a bit ‘meta’, stepping back from our work in order to invest in the capacity to perform it better.

The folks at SSIR have been leading the field on the question of how to really define ‘impact’, and so it’s not their oversight, but I do think that we, collectively, need to spend more time within our organizations, our profession, and our field really clarifying what impact means, and what it looks like, in order to ensure that we will, indeed, know it when we see it.

But maybe approaching it from this direction, asking how we can measure it before we are entirely sure what it is, should offer some appeal.

If one of the reasons we have excused ourselves from getting serious about setting the bar for ‘impact’ accurately has been that we don’t know how we will be able to know when we’ve reached it, then perhaps addressing the latter will light a fire under us for the former.

Are we aiming for the wrong goal? Culture change and social justice

One of the blogs I really enjoy, even though it’s very challenging, is White Courtesy Telephone. A post from their archives, which I recently found, has me thinking about cultural change efforts as essential to social and policy change, and what that understanding–that, to change the policies that impact our lives, we have to change how people feel, not just about those policies, but about the people we serve–would mean for the kind of advocacy campaigns I help organizations design and execute.

Do we need to make cultural change our goal, rather than policy change?

What kinds of strategies and inputs do we need to pull that off? And how well positioned are we to embark on that work, today?

This tension (not always that tense, but certainly there are currents there) is playing out today in the immigration policy world, where I still spend a fair amount of my time.

There are those who focus most of their efforts on promoting greater communication and mutual understanding between immigrants and others in the U.S. I have a ton of respect for their work and, indeed, I think that it can promote systems change (in schools, workplaces, local governments) directly connected to how immigrants experience social policies and, ultimately, to the quality of their lives.

And then there are those of us more explicitly focused on legislative change, in our state legislatures, where we’re mostly playing defense, and in Congress, where the ongoing battle for comprehensive immigration reform challenges our capacity.

And, really, it shouldn’t be ‘either/or’, of course.

We need better policies, yesterday.

And, to get there, we need to change the conversations about the issues we care about, and to engage and activate latent supporters by cultivating a culture of solidarity and a climate of urgency.

Totally.

But, as the blog post points out, in a context of limited resources, this is often framed as a trade-off, with organizations and causes forced to choose between long-term changes in how people view their issues and more immediate (although still, often, long-term) gains in the structures that govern our lives.

Where I come down, then, isn’t so much that we should be doing one and not the other.

We need marriage equality, in law, and we also need to celebrate cultures of inclusion and equity. We need strong childcare supports for working mothers, and we also need new cultural agreements about the role of women in society. We need well-funded public schools and a commitment to the public sphere. We need workable gun laws and a culture of nonviolence.

Yes, and yes, and yes.

I think the bigger question is where we should be intentionally focusing our energies, which comes down to what we see as the causal chain.

Do we view policy change as creating the conditions in which culture change is more likely to happen–desegregation leads to greater racial understanding, stricter DUI laws lead to new social norms about drinking and driving?

Or do we believe that we have to change how people think before we can expect to win changes in the law?

Where’s our target, and, then, how do we craft our strategies accordingly?

What’s going to get us there, most surely, given our shoestring capacities and the odds we face?

What’s the right goal and the right metric to go along with it?

Social indicators and social change

I love it when I find something, online or in a journal, and I think, “THAT is what I’m going to show to my students!”

Especially if I know that it’s going to give me license to say (or at least think in my head), “I was right!”

Every year, my advanced policy students have to do a social problem and social indicator paper. They like the social problem piece just fine; it’s a pretty standard problem analysis and, certainly, there is no shortage of interesting social problems they can study.

But the social indicator piece usually trips them up, because I ask them to really think about how we know what we think we know about a given problem and that, well, gets a little confusing.

I prod them to think about the ways in which the definitions and measurements we use to understand social problems distort them, and how those distortions can be problematic when it comes to trying to solve the problems. I use the example of unemployment, often, to get them thinking about how our definition of ‘unemployed’ (not working and actively looking for work) doesn’t capture nearly all of those who would consider themselves ‘unemployed’. The same is true, certainly, for our definition of ‘homeless’. Many of those technically defined as ‘obese’ today don’t consider themselves such. And we could go on and on. There are areas where we don’t track nearly the entire scope of a problem (child abuse and sexual assault are particularly under-captured), and other problems that we don’t try to measure at all, really (until fairly recently, we didn’t measure asset poverty, for example, or wealth inequality).
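
To illustrate how much the definition itself drives the headline number, here is a small, invented calculation in the spirit of the unemployment example. The counts are made up, and the ‘broader’ measure is only loosely modeled on the wider labor-underutilization measures that official statistics also publish.

```python
# Hypothetical counts, in thousands of people, purely for illustration.
labor_force_official = 160_000   # employed + actively looking for work
actively_looking     = 6_400     # the official 'unemployed'
discouraged          = 1_500     # want a job but stopped looking: not counted
involuntary_parttime = 4_000     # part-time but want full-time: counted as employed

official_rate = actively_looking / labor_force_official
broader_rate = (actively_looking + discouraged + involuntary_parttime) / (
    labor_force_official + discouraged
)

print(f"Official definition: {official_rate:.1%}")  # 4.0%
print(f"Broader definition:  {broader_rate:.1%}")   # about 7.4%
```

Same economy, same people, nearly double the rate, depending on who counts as ‘unemployed’.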

And what we measure matters, I tell them, so, together, we study not only what we know about the problem, but what we really should know, in order to have the best chance of harnessing our social policies to fix it.

Enter Beth Kanter’s post about social media within nonprofit organizations, where she makes the point that, when it comes to metrics of engagement and reach of social media efforts, “what gets measured gets better”.

When organizations see, visually, that their emails are mostly going unopened or that their advocacy alerts send people bouncing right off their website, they tend to be motivated to do something about it. When they see that their Facebook connections have been flat for months, they institute strategies to improve.

Measuring matters.

Which is the whole point of the social indicator assignment, and of my stressing to students that we have to pay attention to what we’re measuring–and how–and what we’re not, because that understanding (and lack thereof) is key to why we are and are not comparatively successful in solving the problem.

If what gets measured gets better, what should we be measuring–or, at least, measuring better–to give ourselves the best tools with which to combat the problem? How can pushing for data, sometimes, be the catalyst for bringing about change (think about progress around racially-motivated policing practices)?

And what should we be measuring, within our organizations (client satisfaction, recidivism, impact), in order to model what we want to see in social policy and to home in on the areas of our own work that need improvement?

What gets measured gets…better. So let’s get measuring.

Advocacy Evaluation and Being ‘Data-Informed’

I wrote a post not too long ago about ‘data-driven cultures’. And then I read Measuring the Networked Nonprofit, and, in just a chapter, Beth Kanter and her co-author changed, somewhat, how I talk about the role of data in nonprofit organizations.

Social services aren’t ever going to be totally ‘data-driven’. There are a lot of factors that impact our decisions and our programming.

And that’s how it should be.

Rather than trying to make social workers slaves to spreadsheets, or pretending that we can make rational every factor that influences our operations, we need to become data-informed organizations, embracing both the power of data and its limits.

As Kanter advises, we need to spend a lot more time thinking about the data we collect than we do collecting it. As I see in the advocacy evaluation collaborative of which I’m a part, we need to find ways to unobtrusively gather data–weaving that into the work as much as possible–so that we have time to sit around and talk about what this means (which, in some cases, is how we ‘analyze’).

We need to resist the temptation to dump data on someone’s desk, thinking that our work is done when the report is published. I ask my clients, from the very beginning, what it is that they hope to learn from a given evaluation effort, what questions we need to ask to figure that out, and with whom they need to share the answers they glean. We plan for usefulness from the start.

It makes me think about an organization I have worked with over the past 18 months or so, which has a Quality Improvement Department–staffed with just a few full-time employees, whose job it is to cull through the organization’s data, looking for patterns and making sense of what they see, and also to systematically share information with others within the organization, so that, together, they can ask the most important question:

“So what?”

But this distinction between being data-driven and data-informed has special importance in advocacy, I think. We’re always exhorted, in advocacy, to have ‘hard facts’, as though the stories we share about policy impact are somehow too soft and squishy to be meaningful.

But the best nonprofit advocates already know that the most powerful advocacy comes from weaving data and narrative, from analyzing numbers to answer hard questions, and from relying on all kinds of knowledge to inform our decisions.

In advocacy, we know that being ‘data-driven’ can lead to outcomes that don’t work for individuals who don’t fit a typical pattern. We know that data don’t change hearts and minds, and that developing power requires creating spaces for people’s voices.

We know that we must be data-informed.

And driven by a vision.

The more we know…

I wrote pretty recently about the benefits of doing advocacy evaluation, for advocates. Instead of viewing evaluation as a chore to be suffered through–for the sake of funders or others trying to hold nonprofits ‘accountable’–we should view it, correctly, as an opportunity to learn more, hopefully in real-time, about whether what we’re doing is working, how we could get better results, and where to focus our limited resources.

I believe that.

It’s why my eyes light up when I help a nonprofit safety-net dental clinic, working to bring affordable, quality health care to rural Kansas, understand how conducting a policymaker rating as part of their advocacy evaluation can help them figure out where their potential allies are and compare how different messages are moving their targeted elected officials.
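
For anyone unfamiliar with the tool, a policymaker rating is, in its simplest form, a repeated scorecard: you rate each targeted official’s support for your issue (and, often, their influence), then re-rate after a round of outreach to see who moved. Here’s a minimal, hypothetical sketch of what that tracking might look like; the names, scales, and scores are all invented.

```python
# Hypothetical policymaker rating tracker; not the clinic's actual tool.
from dataclasses import dataclass

@dataclass
class Rating:
    legislator: str
    influence: int       # 1 (low) to 3 (high) influence on the issue
    support_before: int  # 1 (strong opponent) to 5 (champion), before outreach
    support_after: int   # same scale, after a round of messaging

ratings = [
    Rating("Senator A", influence=3, support_before=2, support_after=3),
    Rating("Rep. B",    influence=2, support_before=4, support_after=5),
    Rating("Rep. C",    influence=1, support_before=3, support_after=3),
]

# Potential allies: already supportive after the latest round of outreach.
allies = [r.legislator for r in ratings if r.support_after >= 4]

# Movement: which targets did the messaging appear to shift the most?
for r in sorted(ratings, key=lambda r: r.support_after - r.support_before, reverse=True):
    print(f"{r.legislator}: {r.support_after - r.support_before:+d} (influence {r.influence})")

print("Potential allies:", allies)
```

Even a grid this simple, revisited after each push, shows where allies are emerging and which messages seem to be moving the right people.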

But something from Measuring the Networked Nonprofit got me thinking even a little bit differently about how to use advocacy evaluation for our own, internal purposes.

Because measurement can make the case for advocacy work within our organizations, to get the power, resources, and attention we need. And deserve.

If advocacy evaluation can show that our campaigns, and our presence in the public dialogue, raise awareness about the organization, Board members who worry about the ‘negative publicity’ from advocacy might reconsider.

If advocacy evaluation can demonstrate that clients who engage in advocacy have stronger attachment to the organization overall, direct-service practitioners may prioritize advocacy work more as part of their own work days.

If advocacy evaluation makes the case that advocacy contributes to (we don’t have to prove attribution here) stronger partnerships with agency allies, then there might be money for advocacy functions as part of other departmental budgets.

I still believe in advocacy evaluation primarily in terms of the pursuit of knowledge.

There is so much we need to learn, and know, in order to work better. And win more.

But if we can also identify evaluation questions, and construct methods, that position us to advocate more effectively within our organizations…then advocacy evaluation just got even more valuable.

The more we know.

If we REALLY thought like a for-profit corporation

It’s an axiom these days:

Nonprofit organizations should operate ‘more like a business’.

The people/donors/media/policymakers who advise this are seldom very specific about what this corporate approach would really look like for social service agencies.

I mean, it’s not like the for-profit world has a lock on efficiency or ‘good governance’, and certainly many nonprofit organizations can measure their impact on a scale more impressive than most businesses.

I think, too often, this exhortation to ‘run like a business’ is really code for, “we’re uncomfortable with the whole ‘social impact’ thing, and not really sure that we should collectively have a responsibility to [fill in the blank worthy cause], so…can’t you just ‘take care of that yourself’, like a business?”

And, my obvious frustration with the ‘wash our hands of this’ approach aside, this post is about one place where I’ll concede that nonprofits for sure have a lot to learn from the for-profit world:

We need to get much, much more comfortable with failure.

Instead of feeling that every grant report we submit has to be full of unqualified successes and ways in which we exceeded all expectations, we need room to acknowledge that something didn’t work. Maybe we know what we need to do differently next time, or maybe we’re not sure, and we need an investment of some wisdom–and space to grow it–to gain some perspectives so that we can try again.

Instead of feeling that every annual report has to gloss over our struggles in favor of shiny examples of victory, we need opportunities to come together with others who are working toward the same goals, to figure out how to move beyond the obstacles that thwart us.

We need a research and development approach to evaluation, like that espoused by TCC Group, where we use evaluation to try out innovations, explore how to scale up promising pilots, and construct a framework that helps us to distill the most essential intervention elements, so that we can most efficiently get to the results we’re seeking.

We need support from those critical to our field–those who financially support us, those who volunteer with us, those who sanction our existence–to engage key stakeholders in the process of “making meaning” from our findings, so that best practices are really that, so that we have an honest dialogue about what worked and what didn’t, and so that we can quickly make the modifications needed for improvement, instead of letting subpar approaches languish just because everyone’s too scared, or too polite, or too socialized to own up to our failures.

In the world of big business, the best-selling books are full of reminders that, to succeed on a big scale, you have to fail massively. Few industries face tasks as daunting as those we in the nonprofit sector concern ourselves with: preventing child abuse, ending homelessness, reducing child hunger, stopping suicide.

The world needs us to succeed.

And that means that we have to learn to risk failure.

Just like a business.

Measuring the Networked Nonprofit

I recently read Beth Kanter’s new book (with coauthor Katie Delahaye Paine): Measuring the Networked Nonprofit.

For me, it was even richer in applicable content and nonprofit inspiration than The Networked Nonprofit, maybe because I am sort of an evaluation geek, or maybe because it’s exciting to see how the field is advancing, and how much more we know about how working in new ways can advance nonprofit missions.

I will be working some of my favorite pieces from the book into posts over the next several weeks–I have sticky notes with citations all over my desk at this point–but, here, in this season of giving, I want to share some key concepts from the book, along with an offer: I’ll give away the extra copy I bought to one randomly selected person who leaves a comment about an evaluation question they wish they had the answer to, for their nonprofit organization’s work.

The best part about the book is the way that it simultaneously demystifies and exalts measurement and learning. Here, it is accessible and valued, integral, but not scary. While a lot of the tips and tools help people think about how to measure what they’re doing with social media, really, the evaluation approach is valuable far beyond that aspect of nonprofit operations.

To get you thinking about measurement within your organization, think about:

  • Networked nonprofits measure failure first. Failure is more interesting, in some ways, and studying it can yield tremendous insights, if we learn not to avert our eyes.
  • We should experiment. When was the last time you deliberately tried something out, within your nonprofit, to see how it would work? If we weren’t afraid to fail, and if we started with questions we want answered, then we would. The book offers some very specific prompts for identifying opportunities to experiment with research questions and methods. I’d love to hear what you try.
  • Key performance indicators are what really matters. We have to figure out what we really want to know, and measure that, with a laser focus and a blind eye to much of the rest. As the authors say, “likes on Facebook is not a victory–social change is.” We have to be careful not to confuse means with ends, and this book helped me with that very important lesson.
  • Knowing more can improve our quality of life and that elusive ‘balance’, if we use data to figure out what we should really prioritize, instead of trying to do everything that sounds like a good idea. What headline would you most like to read about your work? Aim at that, and, probably, other things aren’t really that important.
  • It really is awesome to learn what evaluation and measurement can teach us. As one of the nonprofit leaders quoted in the book said, it’s really fascinating to learn what is fascinating to other people. That’s what measurement can tell us. We should care how many people like our Facebook status or follow our blog or sign up for our action alerts, not because it’s innately interesting that they did those specific things, but because that tells us something really fundamental about what people care about, and what they’re willing to do about the things they care about.

There’s a cool peer learning site that accompanies the book, definitely worth checking out as you make initial forays into measuring your social reach and real impact. If you’re still skeptical, read the list on p. 45. If you’re still, still skeptical, read some of the inspiring case studies about organizations that get it, and how measurement is making a difference for them.

You’ll become a measurement geek, too.

Advocacy Evaluation and Data-Driven Cultures

A huge part of the advocacy evaluation collaborative I’m working on here in Kansas is, as I’ve discussed, the mental shift to get nonprofit organizations thinking about evaluation as something they do for them, instead of something driven by those who write the checks.

It’s the difference between asking, “what do we want to know?” and “what did they say we’re supposed to put in response to #3?”

For some organizations, this is welcomed with open arms.

They developed tools and systems a long time ago, to collect the data they need to answer the questions they need to ask, and they are thrilled that there might actually be ways to share–and be recognized for–these insights.

But, for others, there’s some hesitation here.

Maybe it’s, in part, lack of certainty about how to collect (and, more importantly, analyze) the data they need. Maybe it’s concern that they could become ‘slaves to data’, in a way that would somehow negate their practice wisdom or instincts about how to best approach a given policymaker. Maybe it’s a human resource concern, since very few nonprofits have individuals with either the skill set or the work schedule to make them comfortable with taking on data analysis duties.

Or, most likely given my conversations with organizations through this initiative, it’s sort of all of the above.

Some posts on Beth’s Blog relating to data and measurement and how nonprofit organizations can and should embrace metrics as tools for change offered, to me, a new insight:

Some organizations have ‘data cultures’.

And some don’t. At least not yet.

While most of the conversation, linked above, relates to social media metrics, the typology of organizations and their evolution towards ‘data embrace’ applies to advocacy evaluation–and general program evaluation–too.

In some organizations, staff are comfortable with experimentation, because they know that there will be opportunities to learn what’s working, and what’s not, and to adjust accordingly. They have established systems–like a Quality Improvement Department–that shepherd data collection and analysis throughout the organization. They encourage staff at all levels to ask questions about key indicators, and they include clients, too, in the process of interpreting findings and making sense of them in practice.

And, in some organizations, what data there are (maybe some program participation counts, or raw numbers on donations, or maybe website page hits), are sequestered in one part of the organization, usually towards the top, such that there’s no real conversation around information. These organizations don’t spend much time collecting data, but they spend even less really understanding them, and that’s a bigger concern.

In my work with advocacy evaluation, I’m trying to avoid, if possible, use of the term ‘data’ at all. I think it scares people, especially social work-types, so I try to substitute something more innocuous, like ‘information’, instead. We’re trying to find methods of data collection that fit with organizational work flow and complement their existing strengths.

But Beth’s posts made me realize that we need to attend to something else, too.

We have to connect evaluation to the organizational culture.

This could mean relating every evaluation question back to the organization’s mission. Or highlighting the organization’s key values and those that can be enhanced, in some way, by evaluation (innovation, maybe, or excellence). Or finding champions within the organizational structure who have significant informal influence, and asking them to spearhead the progression towards a data-driven or data-informed approach.

Because dealing with data can be difficult.

Evaluation isn’t easy.

If it’s counter-cultural, that just makes our jobs harder.

Framework for a strong foundation

About a month ago, I posted about this framework for advocacy from the inimitable Tanya Beer with the Center for Evaluation Innovation.

I have used it several times since, to talk with organizations about how they conceptualize their advocacy: the targets toward which it is directed, the tactics they deploy, and the outcomes they can expect.

This week, then, I have three epiphanies connected to this framework (new for me, at least–hopefully they’ll translate for some of you!), one of which I owe directly to my good friend and awesome organizer Jake Lowen, of Kansas Grassroots.

When Jake and I were discussing this framework with some organizations, someone asked whether it is like a menu that organizations can choose from.

After listening and thinking (both things he does very well), Jake responded with a reference to (seriously) an obscure book about architectural theory, relating to the idea that, theoretically, the only factor that limits how tall a skyscraper can be is how wide you can build the base. He suggested, then, that we think about the different elements of this framework, or of our advocacy efforts, not as discrete items to be selected from a menu and cobbled together, but, instead, as bricks in a strong foundation. In this analogy, then, all of the ways in which we advance our issues–policy and research, community organizing, champion development–are essential, although we might emphasize one or another at different points.

And, extending this architectural reference to the current political environment, in Kansas and many parts of the country, our discussion raised the point that we essentially need taller buildings today: the ‘shortcuts’ that were more possible in advocacy when the climate was more favorable, and when we had more champions on the inside to carry our messages, are largely gone. We need, then, an even stronger foundation.

Hence, more bricks.

In applied advocacy language (since my knowledge of architectural theory is now, officially, exhausted), I think this means that we have to stretch ourselves into areas of the framework that might be less comfortable for us, in order to weather the storms that are undeniably part of the advocacy reality today. In some cases, we might be slowly approaching from one corner, in order to ease into policy change. In other cases, we might be surrounding our decision-maker targets with information, public will, and pressure from influentials.

In essence, while we can’t ignore the poetic necessity to, sometimes, just speak truth to power and bang our heads against brick walls, there are often ways over and around the obstacles that we confront in one quadrant…

If we are nimble enough to build in another.

Where, and how, are you building advocacy strategies in this political reality that might differ from years past? How are these other efforts complementing your direct lobbying? How do they build you a stronger foundation?