Tag Archives: research

Fearlessness and Humility: Assets in Inquiry and Advocacy

Just when you thought you were done with cholera.

Almost, I promise.

There is one more passage, describing the way that Dr. John Snow worked, that I just really want to share. I’ll quote it at some length:

“Here we have a man who had reached the very pinnacle of Victorian medical practice–attending on the queen of England with a procedure that he himself had pioneered–who was nonetheless willing to spend every spare moment away from his practice knocking on hundreds of doors in some of London’s most dangerous neighborhoods, seeking out specifically those houses that had been attacked by the most dread disease of the age. But without that tenacity, that fearlessness, without that readiness to leave behind the safety of professional success and royal patronage, and venture into the streets, his “grand experiment”…would have gone nowhere” (p. 108).

I spend quite a bit of time reflecting on what makes advocates succeed, sometimes because I’m looking for inspiration to share, and sometimes in the hope that there are specific pieces of advice to pass on.

And while I think that tenacity is widely regarded as an essential quality in an advocate, because we suffer so many more setbacks than victories, these other aspects of Snow–his fearlessness and his willingness to disregard and even endanger the professional reputation he had built–were just as important. For him, and for us.

Most of the time, our advocacy requires that we convince people to do something different, or at least differently. That means that we have to be willing to be wrong, even spectacularly so, or else we’re probably not reaching far enough. We have to ask questions to which we don’t know the answers. We have to be willing to reach beyond the realm of what we know we do well–direct service, program administration, supervision–and do something that we fear we might not be as good at, because that’s where we are needed.

We have to be not just tenacious, which could be accomplished by doing the same thing over and over again, but also fearless, ready to take on bigger risks or try less-sure things. We have to be fearless for our own sake and also for those we hope to inspire; Snow only got other public health leaders to investigate cholera at its source by first going in himself.

What else would you add to the list of imperative advocate characteristics? What do fearlessness and humility look like in your social change work?

Close knowledge makes a difference

There was another part from The Ghost Map that made me think about social work, and about you all, which means that it ends up here.

So, yes, just a little more cholera.

See, the doctor who ended up tracing the spread of the disease, and documenting the outbreak in a way that gave needed credibility to germ theory and ultimately brought down the idea of ‘miasma’ (smell=disease), was from the neighborhood.

He lived near Broad Street, where the pump contaminated with cholera was located, and that intimate knowledge was essential to helping him untangle the truth.

At the time, remember, most people thought that, since smell brought disease, dirty houses (read: poor people) would have the most illness, because they would smell bad. There were many low-income households in and around the area infected with cholera, and, so, most of the ‘outside experts’ were quick to conclude that it was their poverty, and the smells associated with it, that were quite literally killing them.

But John Snow knew better.

He knew of wealthier households living next to poorer ones, where both fell ill. He knew of very poor households that nonetheless maintained immaculately clean homes. He knew that most of the stereotypes were flawed. He knew that people were dying–real people, with grieving families–because he knew many of those afflicted.

This knowledge meant that he couldn’t fall back on the prevailing wisdom or the platitudes about poverty and disease. He could see facts more clearly, and his inquiry had an urgency stemming from his investment in the community and its suffering people.

And that, I believe, has lessons for social work advocates, too.

I believe that we can work effectively across communities, and that skills and relationships and real empathy are just as important as ‘matching’ membership on specific criteria.

But I also believe that it might be easier to miss things, nuances that really matter, if we see a community more as monolithic, which we’re more likely to do if we’re not embedded in it. I believe that too much distance can render us less effective, less committed, and, ultimately, less likely to succeed.

That’s one of the reasons that social workers make great organizers, and great advocates–we’re on the ground and we know how these issues work and we tend to notice details. We know and care about our work, and that matters for how we engage with it.

In history and still today, being close to the truth makes it more likely we find it.

Blind Spots and Grave Errors: Why do we think we’re immune today?

My oldest son is prone to getting really (REALLY) into something, for a brief period of time, and then moving quickly on. As parents, we try to keep up, encouraging his inquiry and trying not to reel too much when he abandons one topic for another.

For a while, this winter, it was cholera.

As in, specifically, the cholera outbreak in Victorian London, and its contributions to the study of epidemiology and the development of modern sanitation.

He made a ‘ghost map’ showing how a cold outbreak could travel through his school, modeled after the map that Dr. John Snow made to finally prove that cholera resulted from contaminated water and not from bad smells.

And I, to make sure I could understand what he was talking about (and because most of his interests are, actually, quite interesting), read The Ghost Map myself.

And one part that stuck with me was how absolutely certain the best minds of the day were, at the time, that the deadly diseases they confronted must come from the smells of the sewers and of the decay with which they were surrounded. It made so much sense. London smelled really bad, according to almost all contemporary sources, and people were frequently ill, so, then, it made sense that the two would be related. They kept on believing this, even when houses with worse sanitation suffered lower death rates than the richer houses that happened to be downstream. They believed it because it seemed so right, even when data suggested that it wasn’t. At all. They believed it even when believing meant studiously ignoring countervailing facts, and even when believing one way led to behaviors significantly more likely to result in their deaths. They took clear action based on these flawed beliefs, never apologizing for or even seeming to doubt the veracity of beliefs based on no sound science at all.

The author asks, and we must ask ourselves, “How could so many intelligent people be so grievously wrong for such an extended period of time? How could they ignore so much overwhelming evidence that contradicted their most basic theories? These questions, too, deserve their own discipline–the sociology of error” (15).

Because, of course, this wasn’t the first time in history when powerful beliefs that defy truth have led to grave errors. During one outbreak of plague, a belief that the disease was spread by dogs and cats led to mass extermination which, of course, increased the plague, since it was actually spread by rats formerly kept in check by the dogs and cats (120).

And it wasn’t the last.

We have, with a greater or lesser degree of consensus, believed that interning Japanese-Americans would keep us safer; that cigarettes have no ill health effects; that people with mental illnesses belong in institutions; that nuclear power is infallibly safe…

We console ourselves that that was then, before we knew, because we don’t want to contemplate the very finite limits of the knowledge we have today.

And that’s our blind spot, this idea that we could be just as wrong now, about something else, as we recognize in hindsight. We could be ignoring just as many warning signs, about what’s wrong with our economic structure, or what it will take to really make schools work, or what supports young families need to thrive.

We could be just as wrong. And the consequences could be just as tragic.

If we don’t keep asking, why? And wondering, maybe?

It’s only 4 months until spring school board elections!

Yes, I know, a lot of people are still recovering from the 2012 Presidential election. People who watch television tell me that it’s really nice to be able to do so without relentless political advertisements.


I’m thinking about our local and school board elections, set for the beginning of April (just 4 months from now!), and about how, especially in these smaller races that don’t receive nearly the same media attention, the ways in which we communicate about the issues, and the candidates, and the importance of voting are even more critical.

And that got me thinking back to a study in the journal Nature (which always makes me think about the time my friend Tim had a paper published in Science, and told me that all the best journals have just one name, I guess kind of like Madonna?), about the impact of social media posts on people’s political activities and even their opinions. The big-time science-y types who get published in Nature did a study that included everyone who visited Facebook in the U.S., ages 18 and older, on Election Day 2010 (61 million adults). They found that political messages delivered in a social context influenced not just the users who saw them but also those users’ friends. Critically, the social transmission–the fact that the messages were delivered through a social network–mattered more than the content of the messages themselves. If those patterns hold up in future elections, you just may be spared some of those political television ads.

For methodology types, here’s a little more detail on how it worked.

About 98 percent of Facebook users that day saw a “social message” encouraging them to vote. It gave them a link to local polling places and a clickable button that said “I voted”. They could see, on a counter, how many people had clicked the button, and which of their friends had done so. But the remaining 2 percent saw something different. Half of them saw everything the same except WITHOUT the pictures of their friends–the information, but without the ‘social’. The other half saw nothing at all.

When the scientists compared the three groups, with such a large sample size, they found that the messages mobilized people to express their desire to vote by clicking the button, and the social ones even spurred some to actually vote. These effects rippled through the network, affecting not just friends, but friends of friends. (Best part alert): By linking the accounts to actual voting records, they estimated that tens of thousands of the votes cast during the election were generated by this single Facebook message–an increase of 0.39% in voting probability, just from seeing the social message. As the analysis of the study put it, “Facts only mattered when paired with social pressure.”

Furthermore, when they sorted the ‘friends’ into more precise types–close friends, with whom Facebook users interact frequently, versus the more ‘regular’ connections with whom one might not have much (or any) face-to-face interaction–they found that the size of the effects varied as one might expect. The more distant ‘friends’ influenced the odds that someone clicked the “I voted” button, but not the likelihood that a user investigated his/her polling place or went to vote.
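For readers who like to see the design in miniature, the three-group comparison boils down to simple arithmetic. Every count below is a made-up placeholder for illustration, not a figure from the study:

```python
# A minimal sketch of comparing turnout across the three experimental arms.
# All arm sizes and vote counts here are hypothetical placeholders.

def turnout_lift(treated_voters, treated_total, control_voters, control_total):
    """Difference in turnout rate between a treatment arm and the control arm."""
    return treated_voters / treated_total - control_voters / control_total

# Hypothetical numbers, for illustration only.
control = (6_030_000, 15_000_000)  # (voted, total) among users shown nothing
arms = {
    "social message": (6_150_000, 15_000_000),  # message plus friends' faces
    "info only": (6_090_000, 15_000_000),       # message without the 'social'
}

for name, (voted, total) in arms.items():
    lift = turnout_lift(voted, total, *control)
    print(f"{name}: {lift * 100:+.2f} percentage points vs. control")
```

The point of the design is exactly this subtraction: because users were randomly assigned to arms, any difference in turnout rate can be attributed to the message itself rather than to who happened to see it.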

Without the institutional subscription that I enjoy, you won’t be able to read the whole article, so here are the pretty cool points:

  • Nearly all the transmission occurred between ‘close friends’ who were more likely to have a face-to-face relationship, and the effects were strongest there. It makes sense–I may get annoyed when my neighbor or my cousin post political content that I don’t agree with, but I don’t/can’t walk away from them. And if someone I respect points me towards information of which I am skeptical, it makes me think twice.
  • The effects weren’t just on expression–what people posted themselves–but also on information-gathering (who goes to look for what information) and actual voter turnout. Those latter effects were more modest, but, still, with some of the razor-thin election margins we’ve seen recently, even small effects matter.
  • The messenger matters–we know that it’s not just the quality of one’s information, but also the trustworthiness and relational power of the person(s) delivering it.
  • Scale matters, A LOT. The messages themselves and the friends who shared their activity, collectively, accounted for about 0.14% of the voting-age population. That’s more than 280,000 votes, from one Facebook message.
  • One of the coolest things, to me, about this study, is how ‘real’ it is. People didn’t know that they were part of an experiment. They were just doing what they do every day–spending some time on Facebook–and, in the process, shaping their own (and their friends’, and even their friends’ friends’) political behavior. The implications are significant.

And, again, this was for a mid-term congressional election that was, after all, a pretty big deal. Most people, arguably, knew it was happening. There were many other messages in the arena, about the same election.

What about those smaller elections, with historically very poor turnout, where knowing what our friends were doing–and knowing that they, in turn, would know what we were doing (or not)–could produce maybe a few dozen votes in an area where we normally wouldn’t see them?

Maybe we need some experiments of our own, four months from now.

Studies in translation

One of my new projects is a bit of a different approach for me, more directly bridging my pseudo-academic pursuits and my applied advocacy practice.

I am working with the Assets in Education Initiative, an effort of the University of Kansas School of Social Welfare, to produce some materials and provide some advising, towards their aim of making their academic research ‘resonate’ more in the policy arena.

I have some thoughts, and some questions, as I approach these challenges, but, mainly, I want to start by pointing out what should, perhaps, be obvious:

It is super awesome that they are doing this.

How often do we academics (including the just pseudo-ones, like me) read academic literature–others’ or our own–and think, “this has profound implications for policy”, without, perhaps, giving much thought to the unlikelihood that any real policymakers or influencers will ever see it?

How often do we look at a policy and smack our fists against our foreheads, because OF COURSE it’s not going to work, given what we have learned about XYZ issue in the past 30 years of research? It’s like the state legislators haven’t even SEEN the research on the short lengths of stay on TANF and the importance of higher education as a work activity.

Because they haven’t.

My own students lament the fact that, having been trained to look to peer-reviewed literature for trustworthy information about best practices, connections between social problems and interventions (that can form the backdrop of a theory of change), and credible support for the changes they want to pursue…they then find, post-graduation, that they can’t even afford access to the journals on which we tell them to rely.

So I’m struck by the insight, and the humility, with which my new colleagues at AEDI are approaching this ‘next step’ in their work. They recognize that we have learned enough, in the past two decades, to know that helping low-income households accumulate assets can have significant impact on their behavior and even their thinking. They have been part of demonstrating the potential of these interventions through demonstration projects and numerous rigorous research efforts.

The next step:

Leveraging that base of knowledge, and the passions of those who have seen lives transformed through this asset-based approach to fighting poverty and reducing economic inequality, to win policy changes that can take these ideas to scale.

I hope this is just the first I’ve seen of a trend in academic researchers thinking hard about how to translate their ideas for policymakers, media consumption, and advocate empowerment. And, not just thinking about it, but dedicating resources, within their research budgets, to bridge that gap.

Some of the items we’ll undertake are already spelled out, but I am crowdsourcing this a bit, too, and I’d love to hear from those of you on the advocacy side–what do you wish you had, in order to carry academic studies that you find promising to a policymaker audience? And those who are researchers–where are your greatest challenges, in terms of figuring out ‘hooks’ to make your knowledge accessible by those in the policymaking arena?

Thank you, in advance, for your help in this decoding.

There’s no Rosetta Stone for this kind of translation for policy impact.

Mission Essential: Nonprofits Vote

One of my favorite finds, in some of my research for this blog several months ago, is Nonprofit Vote, an organization dedicated to helping nonprofits do voter engagement work right. That means that they identify, support, and applaud efforts that are sustainable, integrated, mission-consistent, and, most of all, impactful.

As we tick down to one year until one of the most important elections I can remember (and, yes, I do kind of say that about most of them!), I’ve been reading through some of the case studies and empirical analyses of what makes a successful voter effort by a nonprofit organization, particularly with an eye towards models that work for social service agencies. Nonprofit Vote has hosted some webinars highlighting successes, and there are some lessons learned that are very much worth sharing.

  • Face-to-face contact is by far the most effective way to increase voter turnout (increasing turnout anywhere from 6 to 14%, depending on the population and the type of election), especially with underrepresented populations. Of course, making those face-to-face contacts with potential voters is very time-consuming and extremely expensive…unless you happen to see them on a regular basis anyway because, I don’t know, maybe they are your clients?
  • The particular study from which I’ve pulled these data was conducted with nonprofit social service agencies, working with a variety of constituencies, in Michigan, and it’s a scientifically rigorous examination of how agency-based voter engagement, specifically, impacts voter behavior. That means that they had random assignment to control and “treatment” groups, the latter defined here as one group at each agency that received a voter registration appeal only and one group that had more sustained communication around voting and its significance. Importantly, some of the participating agencies had NEVER done voter work with their clients before, which makes the results all the more promising, especially for those who might be (wrongly!) thinking that it’s too late for them to develop a 2012 strategy.
  • The key findings, the ones that I think are so exciting? Clients in both treatment groups had a higher likelihood of voting than those in the control group. The likelihood of voter turnout increases proportionally with the nonprofits’ level of voter engagement effort, so it really does pay to go beyond just putting up the “Please Vote” posters (probability of voting increased by about 9% with each contact). Clients in both treatment groups were not only more likely to vote, but also more likely to encourage their family and friends to vote, which means that the same “word-of-mouth” system on which we rely for referrals and health education and so many other critical functions works for encouraging civic participation, too, allowing nonprofits to expand their reach far beyond those they directly serve. Among all the forms of voter assistance nonprofits provided, new voter registrations and voting reminders were the two forms of contact that made the biggest difference in increasing voter turnout.

    There’s nothing “magic” about these organizations, or about the people they serve. Your clients are likely just as responsive to thoughtful, targeted, sustained communication about voting and why it matters as these folks were, and your organization just as capable of integrating these activities into your work.

    In the world of social services, we devote considerable energy to emerging practices with success rates that are anything but guaranteed.

    We know that changing the face of the electorate in the United States will make a difference in the kind of hearing our concerns receive, and the kinds of public policy priorities that rise to the top of the agenda.

    And now we know something more about how to make that happen.

    And so we must.
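For the spreadsheet-inclined, the turnout math above is simple enough to sketch. The client count here is purely hypothetical, and the lift figures just echo the 6 to 14 point face-to-face range cited earlier:

```python
# Illustrative arithmetic only: hypothetical client base, with turnout lifts
# echoing the face-to-face contact range mentioned above.

def extra_voters(clients, lift_pp):
    """Additional voters if turnout rises by `lift_pp` percentage points."""
    return round(clients * lift_pp / 100)

clients = 2_000  # hypothetical agency client base
for lift_pp in (6, 10, 14):
    print(f"+{lift_pp} points: about {extra_voters(clients, lift_pp)} additional voters")
```

Even at the low end of the range, an agency seeing a couple thousand clients a year is looking at numbers that could matter in a low-turnout local race.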

    What the new poverty data say about an old problem

    What they said...

    I’ve spent the last few weeks buried in the U.S. Census Bureau’s new website, trying not to be paralyzed by the fact that the poverty statistics represent, of course, actual people who are poor.

    A lot of them.

    There isn’t anything truly surprising in the new data; poverty has gotten worse–dramatically so, in some cases–with people of color and children, particularly those in single female-headed households, especially vulnerable.

    So, for me, reviewing these figures is not so much about gaining new insights, but about seizing an opportunity to focus our attention, once more, where it belongs–on how terribly our public policies are failing to effectively combat the scourge of poverty.

    Because we’re failing not in explicable or unpredictable ways; we’re failing with tragic routine, reflecting much more a failing of political will than of technical ability.

    And our failure is increasingly dangerous, as the numbers of people in poverty grow, and as we learn more about the lifelong effects of being poor.

    Here’s what we know about poverty in my state in 2011. Now, what should we do about it?

    • Between 2009 and 2010, 20,000 more Kansans were added to the poverty ranks, and the percentage of those living in poverty rose to 14.3%. Kansans of color were disproportionately represented among the poor, with 28.6% of African Americans, 29.7% of American Indians, and 25.4% of Latinos living below the official poverty line.
    • Children are especially suffering in the current economic picture; nationally, 22% of children were in poverty in 2010. In Kansas, an alarming 23.7% of children under age 18 were poor in 2010, up from 18% in 2009, a devastating decline in the fortunes of our state’s youngest and most vulnerable.
    • The poverty rate “gap”, then, between older adults (65+) and children has grown. In 2010, only 7.7% of Kansas seniors were poor. This is a triumph of the social policy innovation we know now as Social Security retirement; without Social Security, the percentage of Kansas seniors living in poverty would rise to more than 40%.
    • Work is no longer a guaranteed path to economic security. In 2010, real median household income in Kansas was $46,229, almost 5% lower than Kansas’ 2007, pre-recession median ($48,497). 27.8% of single female-headed households with children under age 18 had a householder who worked and yet, still, the family fell into poverty. In 12% of cases, these mothers were working year-round, full-time, without being able to pull their families from poverty status, testament to the strains of low-wage labor and the difficult economics facing single parents raising children, particularly when they also experience the wage discrimination that still plagues female employment.
    • Our current poverty measure’s woeful inadequacy makes these statistics all the more alarming; if we used a more realistic threshold (such as those used to determine eligibility for means-tested programs–usually more like 125% of poverty), for example, more than 45% of single female-headed households would have been poor in Kansas in 2010. Similarly, if we accurately defined and measured unemployment (as in, counting people who wish they were working but aren’t–including those so discouraged that they have given up, and those who have involuntarily taken part-time work instead), our unemployment rate would hover around 12%–frighteningly high.
    • Appallingly, poverty in Kansas seems to be increasing more rapidly than in other parts of the country, despite a job market that, in some ways, has not been ravaged as severely as that of other regions. While our overall poverty rate was slightly lower than the national figure, Kansas saw higher rates of child poverty and poverty in single female-headed households in 2010, and higher rates of growth between 2009 and 2010 in several categories.

    We shouldn’t need new statistics to remind us that poverty is a dire and growing threat to community and individual well-being. We don’t need statistics to connect the dots about those we see living in homelessness, or our own coworkers’ concerns about their mortgage payments, or, even, our own fears about the precarious nature of our employment.

    But we can, and, indeed, we must, use the release of these new data on poverty and its shadow–the economic insecurity that is nearly ubiquitous in today’s economy–to dedicate ourselves anew to developing public policy structures and investments that harness our considerable powers to improve people’s lives, individually and in the aggregate.

    Because when the next set of poverty data is released, I want some surprises.

    Nonprofit Policy Forum: A peer-reviewed journal for geeks like me

    I know. It’s not every day that someone’s getting emotional about a peer-reviewed journal. I mean, who uses the term “peer-reviewed” in conversation, anyway?

    But, people.

    Put yourself in my shoes.

    This thing rocks.

    The Nonprofit Policy Forum is a pretty new journal, which, in today’s age of the declining significance of print media, is fairly significant itself.

    And its content is all available online, which is huge in the world of the peer-reviewed, since my former students find themselves abruptly excluded from academic literature as soon as their access to the university’s considerable subscription library expires.

    AND, it focuses on policy process and content, and how both affect and are affected by the nonprofit sector. In other words, giving greater official legitimacy to the study and practice of advocacy and policy change, by nonprofit organizations, as well as discussing emerging policy trends that impact how nonprofits operate.

    So, now you understand.

    In the first issue–the only journal issue I can remember ever reading in its entirety–there is an article reporting that putting clients (here, “constituents”) on a nonprofit Board of Directors and increasing their participation in strategic decision-making significantly increases the intensity of the organization’s advocacy, just as receipt of government and foundation grants tends to decrease it.

    In other words: what we know to be true about the countervailing pressures that weigh on nonprofit organizations in the advocacy arena, confirmed empirically and actually citable. Oh, happy day!

    There’s also an interview with Ambassador Andrew Young, specifically discussing the effectiveness (and limitations thereof) of nonprofit organizations in shaping policy, and a conceptual paper outlining how foundations can approach their philanthropy with an eye towards transformation and systems change. And an article introducing the challenges related to the emergence of social businesses has particular relevance for social workers, who can struggle at times to find ways to practice ethically and effectively in these newer organizational models.

    I’m never one to pretend that academic journals make the world go ’round. Perhaps that’s part of why I’m so hard-pressed to find the time to submit to them?

    But, when sometimes I feel very much like an outlier in the world of academia, given my particular areas of interest, it is very affirming to find communities of like-minded souls, and to be able to turn to their ideas on which to build my own. The way that scholarship is supposed to work.

    Here’s to happy reading (and citing)!

    Dodging futility: USING Community Needs Assessments

    One of my contracts this year has been to conduct a community needs assessment for a consortium of nonprofit social service organizations in a community near where I live. There is a lot about the project that has been rewarding for me; I get a kick out of statistical analysis and probing to see what data can tell us.

    But I’m committed to making sure that my consulting practice is way more about meeting the needs of the organizations and communities I serve than it is about satisfying my own intellectual curiosities. So I’ve spent a lot of time thinking about how to make this process really work for the organizations and their constituencies, and I’ve been reflecting over the past few weeks about what I’ve learned, and about what lessons those experiences might hold for others undertaking community needs assessments.

    Unless your history with needs assessments has been much different from mine, you’ve seen how they can sometimes be exercises in futility–things we have to do because some grant requires them, or things we do because we’re not sure where else to start, but things that end up being a whole lot of input and not much in terms of insight.

    And we were intent on avoiding that.

    It’s certainly too soon to tell exactly how successful we’ve been, really. The true test of the impact of this or any research endeavor will be in how people change what they do to respond to what they now know, and, while we’re seeing some evidence of that, the real measure will be over the next few years. But I think it has been a better-than-average effort that avoided some of the common mistakes. Here’s my list of what made some difference:

  • Involve participating organizations in crafting the questions. In some cases, this meant taking some of my $100 words out of the instrument (we field-tested all of the items). But, more than wordsmithing, we solicited ideas from organizations about the kinds of questions to include–what do they wish they knew about the people they serve? What information would help them plan services? What do their donors want to know? This not only improved the quality of the information we collected, but it also helped the process, by engaging organizations more in the work.
  • Turn results around quickly. Too often, we ask service providers to participate in research and then deliver them data 18 months later. That’s a timeline that works in academia (where I spend half of my working life), but it doesn’t work in the field. At all. So, we committed to a timeline that delivered analysis quickly. Yes, it meant that I did a lot of data entry on the weekends (A LOT), but I’d rather work really hard to turn around information that people can use than work pretty hard and deliver something that has lost its relevance. We got preliminary results to nonprofit partners within about 4 weeks of the end of the data collection period.
  • Plan for dissemination from the beginning. We scheduled a community meeting to share the results before we even started to collect data. We included, in an online survey instrument that was completed by more than 500 social service staff and community stakeholders, questions about the formats in which they would most like to receive information resulting from this assessment. And we developed personalized materials for each agency that highlighted the data in which they were most interested, in formats that they said would work for them. Honestly, this didn’t take a lot more work than producing one standard report–it just required planning for it from the start.
  • Cast a wide net. One of the analyses that most fascinated me was the frequent discrepancy between what service providers and other “experts” viewed as the community’s most pressing needs and what those reportedly experiencing those needs were actually living. To test this more fully, we put many of the same measures–about trends in need over the past 12 months, and about the single greatest priority in the community–to both the sample of organizational leaders and to clients of the group of nonprofits. At first, some were skeptical about both aspects of the design: we got the traditional pushback that “clients won’t want to fill out the survey,” and raised eyebrows about whether United Way donors, school district personnel, and government employees were really invested enough in their communities to participate meaningfully. We ended up with a sample of more than 1300 respondents–maybe not as large as my research training would hope, but large enough to provide some new guidance in these areas–and we were able to pinpoint places of divergence between conventional wisdom and lived reality. In particular, clients saw their situations as far more stable, if still undesirable, than did the larger community sample, and they were much less likely to name their own particular need/niche as a community priority than were representatives of that constituency (so a parent with young children in need of childcare was more attuned to broader job-creation strategies than an employee of an early childhood education organization, who tended to focus more narrowly on that service). We couldn’t have learned this without thinking a bit more loosely about who our “community” is, and who should have a voice.
  • Process matters. I already knew, from my participatory research experiences, that how we ask people to participate in research makes a huge difference in the response (and, then, the ultimate product) we get. Because this community needs assessment involved many different agencies (and we had relatively little control over how they actually administered the survey, despite our instructions), it ended up providing some rich data for a process evaluation. We found, not surprisingly, that organizations that explained to clients what the assessment was, how it would be used, and how they could access the subsequent results had far greater participation than those that took participation for granted or even implied some coercion. People will share information about their lives, even sensitive information, if they think it will advance efforts to meet their needs and the needs of others. Otherwise, they’d rather not. Respecting those who share themselves with us, as clients and as research participants, is not just ethical practice; it’s good methodology, too.
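    The “cast a wide net” comparison can be sketched in a few lines of code. This is a minimal illustration with invented counts–not our actual instrument, categories, or data:

```python
# Hypothetical sketch: how often each group named an issue as the
# community's single greatest priority. All counts are invented.
from collections import Counter

provider_top_picks = ["childcare"] * 42 + ["jobs"] * 18 + ["housing"] * 25
client_top_picks = ["jobs"] * 61 + ["childcare"] * 12 + ["housing"] * 30

def priority_shares(picks):
    """Each issue's share of top-priority mentions in a sample."""
    counts = Counter(picks)
    total = sum(counts.values())
    return {issue: round(n / total, 2) for issue, n in counts.items()}

providers = priority_shares(provider_top_picks)
clients = priority_shares(client_top_picks)

# Positive values: providers emphasize the issue more than clients do.
divergence = {issue: round(providers.get(issue, 0) - clients.get(issue, 0), 2)
              for issue in set(providers) | set(clients)}
```

With these made-up numbers, “childcare” shows a large positive gap (provider emphasis) and “jobs” a large negative one (client emphasis)–the same pattern of divergence between conventional wisdom and lived reality described above.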

    I’d love to hear from others who have conducted community needs assessments about what worked for you–how were your data used, and what did you do to increase their relevance? What lessons can you share about what to do (or not)? What should be the goals of community needs assessments, and how can we structure the processes so these goals are met?

  • Why statistics still matter

    I still believe that stories and real people are what change minds and motivate hearts. I’ve seen it so many times…in an argument, observers’ eyes glaze over when the combatants start to hurl data at each other, and if anyone ever mentions what a “multivariate regression analysis” has supposedly proven beyond (almost) (because you know that researchers never say never) any reasonable doubt, you’re sunk. They’ll never listen to another word again.

    But when it comes to seeking guideposts for our own work, deciding on policy priorities, allocating scarce resources, and determining the extent to which we’re reaching our goals, good statistics can be much more helpful than anecdotes. As powerful as anecdotes are, they are really, really bad bases for policymaking–I found that out the hard way when I gave legislative testimony that included the story of a legally present immigrant student who was still ineligible for in-state tuition, and the legislature then wanted to amend the bill to cover only kids like him, which would have excluded the VAST majority of students the measure was meant to reach.

    That means that, while we must personalize our claims to social justice in ways that compel right action, we must also be comfortable and skilled enough with statistics to keep those very compelling stories from leading us astray.

    An example from Super Freakonomics (I’m kind of getting hooked on these guys), and one from my consulting work, illustrate this, probably better than I have so far.

    The example from the freak-economists is truly crazy: apparently, mile for mile, an intoxicated walker is eight times more likely to die than an intoxicated driver. The authors certainly don’t suggest that the work of anti-drunk-driving advocates has been in vain, but they do use data to convincingly redirect our attention to the dangers caused by excessive consumption of alcohol, period, not just the preoccupation with drunk driving. They point to other, less surprising, clashes between data and story–the high-profile death of a young boy from a shark attack, for example, when only four people, on average, are killed by sharks worldwide each year.

    Here’s how they state what I’ve been trying to, about why statistics still matter, even in this digital storytelling age:

    “While there are exceptions to every rule, it’s also good to know the rule. In a complex world where people can be atypical in an infinite number of ways, there is great value in discovering the baseline. And knowing what happens on average is a good place to start. By so doing, we insulate ourselves from the tendency to build our thinking–our daily decisions, our laws, our governance–on exceptions and anomalies rather than on reality” (p. 14).
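    That “baseline” idea is almost embarrassingly simple to compute. A toy sketch, with invented yearly figures–only the roughly-four-per-year shark number echoes the authors’ worldwide average; the comparison series is made up purely for contrast:

```python
# Invented yearly death counts for two risks: one vivid but rare,
# one mundane but common. Only the ~4/year shark figure echoes the book.
shark_deaths = [3, 5, 4, 4, 6, 2]
other_risk_deaths = [3500, 3700, 3600, 3400, 3650, 3550]

def baseline(series):
    """The average: 'knowing what happens on average is a good place to start.'"""
    return sum(series) / len(series)

print(baseline(shark_deaths))      # 4.0
print(baseline(other_risk_deaths)) # ~3566.7
```

One dramatic story can’t tell you which risk deserves attention and resources; the baselines can.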

    Which brings me to my consulting work, where I’ve been helping some anti-poverty agencies assess needs in their communities. One reality quickly becomes apparent: perceived needs often diverge from actual needs, and because perception matters so much to whether powerful entities will recognize and address those needs, figuring out what the actual needs are becomes very difficult.

    An example of this was the discussion of teenage pregnancy in two communities. In one, teenage pregnancy featured prominently in key informants’ assertions about priority needs and yet barely registered in the actual data on births to teenage mothers. Clearly, there was some reason these different respondents kept pointing to teenage pregnancy as an “alarming trend” (in the words of one) when there was no trend that we could find, nor much cause for alarm. I’m not sure what was driving that, but what is certain is that, if the community diverts resources from other efforts toward preventing teenage pregnancy, there may very well be an uptick in the incidence of those other social problems, some of which (in this community, lack of access to affordable, quality childcare, for example) may be, in absolute terms, far more pressing.

    In the other community, as you might guess, the opposite dynamic is playing out. Even though agency staff recognize that teenage pregnancy is nearly epidemic there, and even though statistics show that fully 50% of young women ages 13-19 have given birth, hardly anyone raised teenage pregnancy or related issues in the needs assessment process. Here is a community that must confront its statistics in order to have half a chance at solving the problem effectively.

    So, tell your stories, absolutely. Appeal to values, and connect with people’s hearts.

    But when you sit down to check yourself, make sure that you can find, understand, and face what those not-irrelevant averages are telling you, too.