I have spent the last several months trying to teach myself how to code. Most people in the social sciences who think about learning to code see it as a tool for data analysis. That was also my motivation for learning it. Yet I want to allocate some space to explain why I think most people, by which I mean anyone with a humanistic or scientific background, would enjoy learning it.
Back in 2008 I took a distance learning course in the Philosophy of Social Science. I was coming into the field of economics from outside: my previous background was in law and European affairs. It was the wake of the financial crisis, and I was interacting at the time with many critics of the field who attacked its very basis from an epistemological point of view: game theory assumes implausible rationality, excessive simplification, and so on. As a result, I invested most of my effort in the course in answering those critiques and read a huge amount of philosophy of the behavioral sciences; the best book I read was this. In my naive mind, I thought becoming knowledgeable in the deep foundations of the discipline would make me a better scientist, so I took an overdose of philosophy of mind.
Obviously, I ended up realizing I had overshot and learned far too much highly interesting, but not so useful, not even quotable, stuff. Still, my interest in the topic remained; I acquired a taste for something I came to think of as pretty much essential. I fell in love with Dan Dennett’s version of functionalism (a wonderful introduction to his philosophy is this little book).
From Dennett’s view of agents as “intentional systems” I retained a certain attitude: thinking about the world in a rather cynical and instrumental way. Dennett approaches the problem of the “manifest image” of the world by treating minds as things with propositional attitudes (beliefs and desires), while avoiding, at first, the very unproductive ontological question of whether those propositional attitudes exist or not. He then proceeds to suggest that we should rethink our concept of existence as something more than little things hitting each other.
This is the (highly linearized) intellectual path through which I became sympathetic towards the behaviorist attitude that is mainstream in economics and formal-modeling social science. Instead of trying to dig deep into the minds and hearts of people, as ethnographers (an absolutely great and absorbing book I just finished) or social historians do, you start from the very other end: you look at behavior, try to capture systematic patterns, and then model each pattern as a set of preferences, expressed as a utility function and constraints (this account of revealed preference theory is Don Ross’, based on Ken Binmore’s), while remaining agnostic about whether that approach is appropriate for studying human behavior at other levels. This means: if you are a psychologist studying pathological behavior, you clearly DON’T want to take that oversimplified view of how behavior works as your starting point; yet, if you are trying to understand how things work at the aggregate level, it is a sophisticated enough way of integrating the behavior of lower-level individuals into macro-social patterns.
This is a very powerful idea, one that should be integral to any critical understanding of the world: an intuitive understanding of how aggregation works, of how simple, uncoordinated units interacting together can make more complex behavior emerge. I sometimes feel that one reason public debate is so often disappointing is that we fail to grasp the ideas of aggregation and emergence. We fail to understand how men make their own history, how diffuse and impersonal causes interact to produce unintended consequences; instead, we tend to attribute responsibilities and to see conspiracies everywhere.
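To make the point concrete, here is a toy example of my own (not something from the argument above, just an illustration): Conway’s Game of Life. Each cell follows one dumb local rule, yet a coherent higher-level object, the famous “glider”, travels across the grid even though no rule ever mentions movement.

```python
from collections import Counter

def step(cells):
    """One Game of Life step on a set of live (x, y) cells.
    Rule: a cell is live next step if it has exactly 3 live
    neighbors, or is currently live with exactly 2."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

# The standard glider pattern:
#   .O.
#   ..O
#   OOO
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After 4 steps the glider reproduces itself shifted one cell diagonally:
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # True
```

No individual cell “wants” to move, and none knows what a glider is; the movement exists only at the aggregate level. That is exactly the lower-level/higher-level relationship the paragraph above is about.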
Here is where computer science comes in. John von Neumann reportedly said:
You insist there is something a machine cannot do. If you tell me precisely what it is that a machine cannot do, then I can always make a machine that will do just that.
Formal modeling and programming are, in essence, the same thing performed in different ways: both are exercises in aggregation and emergence. Programming is about teaching a computer, through a large number of small, stupid steps, to perform something allegedly more complex. Formal modeling is about establishing a link between parameters of interest, with a certain interpretation, and observable variables. Solving a model formally amounts to spelling out the relationship between the variables of interest and the parameters in “closed form”. Solving it numerically, in turn, amounts to simulating the consequences: fitting a “curve” with the appropriate shape for each value of the parameters, to see how alternative parameters produce alternative patterns. In both cases, you work as an engineer: you build a machine that replicates the behavior of a system, whatever that system may be.
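A minimal sketch of the two routes, using the simplest model I can think of (my own toy example, not one discussed above): compound growth, where x grows by a rate r each period. The closed-form route spells out the relationship between the parameter and the observable directly; the numerical route simulates the small steps and recovers the same pattern, and sweeping r shows how alternative parameter values produce alternative patterns.

```python
def closed_form(x0, r, t):
    """Analytic solution of x_{t+1} = (1 + r) * x_t,
    namely x_t = x0 * (1 + r)**t."""
    return x0 * (1 + r) ** t

def simulate(x0, r, t):
    """Numerical solution: iterate the one-step rule t times."""
    x = x0
    for _ in range(t):
        x = (1 + r) * x
    return x

# Sweep the parameter: each value of r traces out a different "curve".
for r in (0.01, 0.05, 0.10):
    a, b = closed_form(100.0, r, 20), simulate(100.0, r, 20)
    print(f"r={r:.2f}: closed form {a:.2f}, simulated {b:.2f}")
```

For a model this simple the two routes agree to floating-point precision; the numerical route earns its keep precisely when no closed form exists and simulation is all you have.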
How is this useful? When you program an interactive system, say, a role-playing game, it may look as if there were a person on the other side, one close to passing the Turing test. It may turn out to be a zombie, but we Dennettians are happy with zombies. You can probably learn a lot about how the mind might work by building artifacts that act like minds, through the aggregation of simple instructions. That is, for me, the key lesson I learned reading Dan Dennett.
Just like Dennett’s philosophy of mind, coding shows you how aggregation works, how interaction between lower- and higher-level patterns (can) happen. It helps you internalize a critical lesson of science, one that, I claim, everyone should eventually incorporate into their worldview: how the magic of aggregation and emergence happens.