Dodging futility: Using Community Needs Assessments
One of my contracts this year has been to conduct a community needs assessment for a consortium of nonprofit social service organizations in a community near where I live. There is a lot about the project that has been rewarding for me; I get a kick out of statistical analysis and probing to see what data can tell us.
But I’m committed to making my consulting practice far more about meeting the needs of the organizations and communities I serve than about satisfying my own intellectual curiosity. So I’ve spent a lot of time thinking about how to make this process really work for the organizations and their constituencies, and I’ve been reflecting over the past few weeks on what I’ve learned, and on what lessons those experiences might hold for others undertaking community needs assessments. Unless your history with needs assessments has been much different from mine, you’ve seen how they can sometimes be exercises in futility–things we have to do because some grant requires them, or things we do because we’re not sure where else to start, but things that end up being a whole lot of input and not much insight.
And we were intent on avoiding that.
It’s certainly too soon to tell exactly how successful we’ve been. The true test of the impact of this or any research endeavor is in how people change what they do in response to what they now know, and, while we’re seeing some evidence of that, the real measure will come over the next few years. But I think it has been a better-than-average effort that avoided some of the common mistakes. Here’s my list of what made a difference:
Involve participating organizations in crafting the questions. In some cases, this meant taking some of my $100 words out of the instrument (we field-tested all of the items). But, more than wordsmithing, we solicited ideas from organizations about the kinds of questions to include–what do they wish they knew about the people they serve? What information would help them plan services? What do their donors want to know? This not only improved the quality of the information we collected; it also strengthened the process by engaging organizations more deeply in the work.
Turn results around quickly. Too often, we ask service providers to participate in research and then deliver the data to them 18 months later. That’s a timeline that works in academia (where I spend half of my working life), but it doesn’t work in the field. At all. So we committed to a timeline that delivered analysis quickly. Yes, it meant that I did a lot of data entry on the weekends (A LOT), but I’d rather work really hard to turn around information that people can use than work pretty hard and deliver something that has lost its relevance. We got preliminary results to nonprofit partners within about four weeks of the end of the data collection period.
Plan for dissemination from the beginning. We scheduled a community meeting to share the results before we even started to collect data. We included, in an online survey instrument that was completed by more than 500 social service staff and community stakeholders, questions about the formats in which they would most like to receive information resulting from this assessment. And we developed personalized materials for each agency that highlighted the data in which they were most interested, in formats that they said would work for them. Honestly, this didn’t take a lot more work than producing one standard report–it just required planning for it from the start.
Cast a wide net. One of the findings that most fascinated me was the discrepancy, in many cases, between what service providers and other “experts” viewed as the most pressing needs in the community and what those reportedly experiencing those needs were actually living. To test this more fully, we posed many of the same questions–about trends in need over the past 12 months, and about the single greatest priority in the community–to both the sample of organizational leaders and to clients of the group of nonprofits. At first, some were skeptical about both aspects of the design: we got some of the traditional pushback that “clients won’t want to fill out the survey,” along with raised eyebrows about whether United Way donors, school district personnel, and government employees were really invested enough in their communities to participate meaningfully. We ended up with a sample of more than 1,300 respondents–maybe not as large as my research training would hope, but large enough to provide some new guidance in these areas–and we were able to pinpoint places where conventional wisdom and lived reality diverged. In particular, clients saw their situations as far more stable, if still undesirable, than did the larger community sample, and they were much less likely to name their own particular need as the top community priority than were representatives of the constituency serving that need (so a parent with young children in need of childcare was more attuned to the importance of broader job creation strategies than an employee of an early childhood education organization, who tended to focus more narrowly on that service). We couldn’t have learned this without thinking a bit more loosely about who our “community” is, and who should have a voice.
Process matters. I already knew, from my participatory research experiences, that how we ask people to participate in research makes a huge difference in the responses (and, ultimately, the product) we get. Because this community needs assessment involved many different agencies (and we had relatively little control over how they actually administered the survey, despite our instructions), it ended up providing rich data for a process evaluation. We found, not surprisingly, that organizations that explained to clients what the assessment was, how it would be used, and how they could access the subsequent results had far greater participation than those that took participation for granted or even implied some coercion. People will share information about their lives, even if it’s sensitive, if they believe it will advance efforts to meet their needs and the needs of others. Otherwise, they’d rather not. Respecting those who share themselves with us, as clients and as research participants, is not just ethical practice; it’s good methodology, too.
I’d love to hear from others who have conducted community needs assessments about what worked for you–how were your data used, and what did you do to increase their relevance? What lessons can you share about what to do (or not)? What should be the goals of community needs assessments, and how can we structure the processes so these goals are met?