Software code as regulatory advocacy

Even with as much reading and thinking as I do about social work, social problems, and social change, there’s still something, every couple months or so, that completely blows my mind.

I love it when I come across something that makes me think, “of course!”

And, when it intersects with regulatory policy and advocacy…well, that’s just about perfect.

An essay by Gene Koo in Rebooting America brought together just such a trifecta. He writes about the increasing ubiquity of software code as a controlling influence in the implementation of social policy, and of the associated technical and political challenges in ensuring that computers don’t literally take over human judgment in critical areas of social welfare.

When you start to think about it, there are so many ways in which software is replacing the decision-making even of powerful legislative and regulatory actors. Computer programs determine eligibility for public benefits, process appeals, and calculate compliance with program guidelines. Those functions shape how a policy operates overall and, in the lives of an individual or family, they can be monumental.
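
To make that concrete, here’s a minimal, purely hypothetical sketch (in Python, just because it’s readable) of the kind of eligibility logic at stake. The 130% threshold, the poverty-line table, and the function itself are invented for illustration, not drawn from any real program.

```python
# Hypothetical income-eligibility check for an imagined benefit program.
# Every number here (the poverty-line table, the 130% multiplier) is
# illustrative only.
POVERTY_LINE_BY_HOUSEHOLD_SIZE = {1: 15_060, 2: 20_440, 3: 25_820, 4: 31_200}

def is_income_eligible(gross_monthly_income: float, household_size: int) -> bool:
    """Return True if gross monthly income is at or below 130% of the poverty line."""
    annual_poverty_line = POVERTY_LINE_BY_HOUSEHOLD_SIZE.get(household_size)
    if annual_poverty_line is None:
        # A household size the table never anticipated is simply denied.
        # That fallback is a policy decision, but no legislature made it.
        return False
    monthly_limit = (annual_poverty_line * 1.30) / 12
    return gross_monthly_income <= monthly_limit
```

Nothing in that little function looks like a statute, but for the family on the other side of the screen, it is the statute.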

Koo’s points about the ways in which software code shapes policy implementation mirror discussions elsewhere about the significance of regulations as the place where a policy’s intentions are translated into actual operations.

As with those rules, software code can reflect routine errors (the easiest thing to fix!) or, more perniciously, it can give rise to what is called “codelaw”: an implementation that, while not directly contradicting the law, reflects only one of several plausible ways to construe it. It’s in this case that software code essentially makes law, and it’s here that the same kinds of advocacy strategies we apply in the regulatory context can make a real difference: pointing out contradictions with legislative intent, illustrating pragmatic implementation hurdles, demonstrating the potential for inconsistent impact, and mobilizing key political and technical stakeholders.
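
Another hypothetical sketch shows how that happens: two readings of the same imagined statutory phrase, “countable monthly income,” each defensible, each producing different outcomes. The functions and the deduction they disagree about are invented for illustration.

```python
import math

def countable_income_reading_a(gross: float, child_support_paid: float) -> int:
    # Reading A: the statute names no deductions, so count gross income
    # and round to the nearest whole dollar.
    return round(gross)

def countable_income_reading_b(gross: float, child_support_paid: float) -> int:
    # Reading B: court-ordered child support was never available to the
    # household, so subtract it and round down in the applicant's favor.
    return math.floor(gross - child_support_paid)

# Neither reading contradicts the imagined statute, but a family near the
# eligibility line can qualify under one and be denied under the other.
# Whichever function ships is, in effect, the law that family experiences.
```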

Koo’s discussion parallels conversations on this blog and elsewhere about the proper place for discretion in social policy. While software code can eliminate dangerously capricious decisions, which can be good governance, it also takes away trained personnel’s ability to bring compassion to the way they carry out legal mandates. This is the same tradeoff we contemplate in debates about how precisely to draw regulatory guidelines. Here, though, even more than with bureaucratic regulators, we’re trusting the critical task of “filling in the gaps” not to trained government employees, mostly committed to the programs they oversee, but to software developers who, while technically expert, often have neither substantive knowledge of the law nor accountability to the general public.

Kind of scary, really, especially since, while I can recognize a problematic regulation when I see it, I have no such dexterity when looking behind the curtain of a software program.

The strategies that Koo suggests for working within this new reality of “codelaw” also parallel those that work in the regulatory context. We should have a sort of “notice and comment” period during which people can submit potentially tricky cases and see how the software code handles them, and our campaigns should identify software experts who can lend their expertise at this phase (a perfect opportunity for crowdsourcing!). He also recommends resisting the “baby and bathwater” reaction: we should recognize software’s potential to ameliorate failings such as racism and sexism, which have certainly been endemic to human judgment and discretion within our welfare systems.
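
One way to picture that “notice and comment” idea, again as a hypothetical sketch: advocates submit edge cases as plain scenarios, and the agency publishes how its eligibility engine resolves each one. The scenarios, the Scenario fields, and the review() helper below are all invented.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    description: str
    gross_monthly_income: float
    household_size: int
    expected_eligible: bool  # the outcome advocates believe the law requires

# Edge cases submitted during the comment period (values are illustrative).
SUBMITTED_SCENARIOS = [
    Scenario("Income exactly at the published limit", 1_631.50, 1, True),
    Scenario("Six-person household, missing from the table", 2_000.00, 6, True),
]

def review(check_eligibility) -> None:
    """Run each submitted scenario against the agency's engine and flag disagreements."""
    for s in SUBMITTED_SCENARIOS:
        actual = check_eligibility(s.gross_monthly_income, s.household_size)
        status = "OK" if actual == s.expected_eligible else "DISPUTED"
        print(f"[{status}] {s.description}: engine says eligible={actual}")
```

Pointed at the earlier is_income_eligible() sketch, the six-person-household scenario comes back DISPUTED, which is exactly the kind of silent gap a comment period should surface.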

Like so many of the new technological applications in the field of social work, codelaw is here to stay. Our challenge is to figure out how to make it work for us: how to work within this framework to protect our legislative gains (and lessen the sting of our losses!), build relationships with those in power, and enhance the power of our constituencies.

Even when that means a string of characters is our advocacy target.

Where have you encountered “codelaw” in social policy? How have you worked, successfully or not, to advocate for kinder, gentler software systems in these areas? What lessons have you learned in that advocacy?
