MIT (9) Computation as a Policy Lens

Here I want to talk about something that is completely obvious to me but seems not to be obvious to everybody else. When I studied computational theory in computer science school, there was some question as to why we were doing so; the reason given was that we needed to understand the bounds of the possible, so that as engineers we knew which classes of solutions were impossible and didn't waste time attempting them. Although it has been a while since I studied it, Turing-style computational theory starts from combinations of simple if-then-else logic, yet it quickly reveals hard limits: you cannot determine, in general, whether an arbitrary program will ever stop (the Halting Problem), and you cannot efficiently find the shortest route through even a moderately complex set of stops (the Traveling Salesman Problem).

When I studied politics and policy at MIT, it seemed to me that the complex social systems we examined were far more complex, less structured, and less well specified than computer programs, and yet policy makers and planners made all kinds of heroic assumptions that invariably came not to pass. This was especially true of radical theories and policies, and by that I don't mean to treat socialist, communist, or social justice-oriented theories pejoratively; I mean that making large and significant changes to any system tends to generate unexpected results. Engineers therefore make only small changes to a system at a time, and checkpoint it so they can always return to an earlier state if something goes wrong. Policy makers cast aside such caution and endeavor to make the biggest and most radical changes, and when policy objectives are not achieved, which is almost always the case, they make sure there is somebody to blame, such as the people who supplied the data on which the policy was based. As they say, there are no policy failures, only intelligence failures.
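As a rough analogy, here is a minimal Python sketch of the checkpoint-and-rollback habit described above; the "system state", the "change", and the "health check" are hypothetical stand-ins, not any particular engineering tool or policy model.

```python
# A minimal sketch of incremental change with a checkpoint: apply one small
# change, verify the system still looks healthy, and roll back if it does not.
# The state, change, and health check below are all hypothetical.
import copy

def apply_with_checkpoint(state, change, is_healthy):
    """Apply a single change; restore the saved state if the check fails."""
    checkpoint = copy.deepcopy(state)         # remember the earlier state
    new_state = change(copy.deepcopy(state))  # one small change at a time
    return new_state if is_healthy(new_state) else checkpoint

# Example: expand a service only while it stays within budget.
state = {"budget": 100, "services": 10}
state = apply_with_checkpoint(
    state,
    change=lambda s: {**s, "services": s["services"] + 1},
    is_healthy=lambda s: s["services"] * 5 <= s["budget"],
)
print(state)
```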

One of the key qualities of human cognition is that it is bounded in measurable and predictable ways. As the Nobel Prize-winning economist Herbert Simon put it, “People will not do what they cannot do,” and people cannot track large numbers of variables in a complex system, whether technical or social, and make accurate predictions of how that system will behave in the future. High-performance computers in general, and models and simulations (M&S) in particular, are better at that task. Finding ways to combine boundedly rational human cognition with computer-based M&S to improve policy is an ongoing area of research.
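To illustrate the kind of task where M&S has the edge, here is a minimal Python sketch of a toy simulation, with entirely invented parameters and no connection to any real policy model: a person cannot mentally track two hundred interacting variables over many rounds, but running the model a thousand times gives a usable estimate of how the system tends to behave.

```python
# A toy Monte Carlo sketch (all parameters invented): many small interactions
# among agents, repeated runs, and a distribution of outcomes at the end.
import random

def simulate_once(n_agents=200, rounds=50, seed=None):
    """One run of a toy system in which each interaction nudges two agents together."""
    rng = random.Random(seed)
    state = [rng.uniform(0, 1) for _ in range(n_agents)]
    for _ in range(rounds):
        i = rng.randrange(n_agents)
        j = (i + 1) % n_agents
        # Each small interaction pulls neighboring agents slightly closer.
        state[i], state[j] = (state[i] * 0.9 + state[j] * 0.1,
                              state[j] * 0.9 + state[i] * 0.1)
    return sum(state) / n_agents  # one summary measure of the whole system

# Many runs give a distribution of outcomes rather than a single guess.
outcomes = [simulate_once(seed=k) for k in range(1000)]
print("mean outcome:", round(sum(outcomes) / len(outcomes), 3))
```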

Ultimately, the accurate prediction of consequences is at the foundation of morality. As Jesus said, a tree is known by its fruits (Matthew 7:15-20 and Luke 6:43-45). As social systems become more complex, accurately judging their behavior and modifying them effectively and predictably will become increasingly important. The new field of computational social science has begun to apply computer-based techniques in the academy, but uptake in applied policy settings has been comparatively slow.
