I love it when I find something, online or in a journal, and I think, “THAT is what I’m going to show to my students!”
Especially if I know that it’s going to give me license to say (or at least think in my head), “I was right!”
Every year, my advanced policy students have to do a social problem and social indicator paper. They like the social problem piece just fine; it’s a pretty standard problem analysis and, certainly, there is no shortage of interesting social problems they can study.
But the social indicator piece usually trips them up, because I ask them to really think about how we know what we think we know about a given problem and that, well, gets a little confusing.
I prod them to think about the ways in which the definitions and measurements we use to understand social problems distort them, and how those distortions can be problematic when it comes to trying to solve the problems. I use the example of unemployment, often, to get them thinking about how our definition of ‘unemployed’ (not working and actively looking for work) doesn’t capture nearly all of those who would consider themselves ‘unemployed’. The same is true, certainly, for our definition of ‘homeless’. Many of those technically defined as ‘obese’ today don’t consider themselves such. And we could go on and on. There are areas where we don’t track nearly the entire scope of a problem (child abuse and sexual assault are particularly under-captured), and other problems that we don’t try to measure at all, really (until fairly recently, we didn’t measure asset poverty, for example, or wealth inequality).
And what we measure matters, I tell them, so, together, we study not only what we know about the problem, but what we really should know, in order to have the best chance of harnessing our social policies to fix it.
Enter Beth Kanter’s post about social media within nonprofit organizations, where she makes the point that, when it comes to metrics of engagement and reach of social media efforts, “what gets measured gets better”.
When organizations see, visually, that their emails are mostly going unopened or that their advocacy alerts result in visitors bouncing off their website, they tend to be motivated to do something about it. When they see that their Facebook connections have been flat for months, they institute strategies to improve.
Which is the whole point of the social indicator assignment, and of my stressing to students that we have to pay attention to what we’re measuring–and how–and what we’re not, because that understanding (or the lack of it) is key to why we are, or are not, successful in solving the problem.
If what gets measured gets better, what should we be measuring–or, at least, measuring better–to give ourselves the best tools with which to combat the problem? How can pushing for data, sometimes, be the catalyst for bringing about change (think about progress around racially-motivated policing practices)?
And what should we be measuring, within our organizations (client satisfaction, recidivism, impact), in order to model what we want to see in social policy and to home in on the areas of our own work that need improvement?
What gets measured gets…better. So let’s get measuring.