A huge part of the advocacy evaluation collaborative I’m working on here in Kansas is, as I’ve discussed, the mental shift to get nonprofit organizations thinking about evaluation as something they do for them, instead of something driven by those who write the checks.
It’s the difference between asking, “what do we want to know?” and “what did they say we’re supposed to put in response to #3?”
For some organizations, this is welcomed with open arms.
They developed tools and systems long ago to collect the data they need to answer the questions that matter to them, and they are thrilled that there might actually be ways to share–and be recognized for–these insights.
But, for others, there’s some hesitation here.
Maybe it’s, in part, a lack of certainty about how to collect (and, more importantly, analyze) the data they need. Maybe it’s concern that they could become ‘slaves to data’ in a way that would somehow negate their practice wisdom or instincts about how best to approach a given policymaker. Maybe it’s a human resources concern, since very few nonprofits have staff with either the skill set or the work schedule to comfortably take on data analysis duties.
Or, most likely given my conversations with organizations through this initiative, it’s sort of all of the above.
Some organizations have ‘data cultures’.
And some don’t. At least not yet.
While most of the conversation, linked above, relates to social media metrics, the typology of organizations and their evolution towards ‘data embrace’ applies to advocacy evaluation–and general program evaluation–too.
In some organizations, staff are comfortable with experimentation, because they know that there will be opportunities to learn what’s working, and what’s not, and to adjust accordingly. They have established systems–like a Quality Improvement Department–that shepherd data collection and analysis throughout the organization. They encourage staff at all levels to ask questions about key indicators, and they include clients, too, in the process of interpreting findings and making sense of them in practice.
And, in some organizations, what data there are (maybe some program participation counts, or raw numbers on donations, or maybe website page hits) are sequestered in one part of the organization, usually toward the top, such that there’s no real conversation around information. These organizations don’t spend much time collecting data, but they spend even less really understanding them, and that’s a bigger concern.
In my work with advocacy evaluation, I’m trying to avoid, if possible, use of the term ‘data’ at all. I think it scares people, especially social-work types, so I try to substitute something more innocuous, like ‘information’, instead. We’re also trying to find methods of data collection that fit with organizations’ workflows and complement their existing strengths.
But Beth’s posts made me realize that we need to attend to something else, too.
We have to connect evaluation to the organizational culture.
This could mean relating every evaluation question back to the organization’s mission. Or highlighting the organization’s key values and those that can be enhanced, in some way, by evaluation (innovation, maybe, or excellence). Or finding champions within the organizational structure who have significant informal influence, and asking them to spearhead the progression towards a data-driven or data-informed approach.
Because dealing with data can be difficult.
Evaluation isn’t easy.
If it’s counter-cultural, that just makes our jobs harder.