One reason why experiments may work for drugs but not for policies

As anyone involved in research knows all too well, scientific reasoning is almost never a matter of logic alone. Even in hard quantitative science, many elements external to the logic of the model (whether formal or empirical) are nonetheless essential to justifying the reasoning. These elements include, of course, formal “assumptions”, but also many other, much more informal devices needed to “tell a story” with the model, to interpret its parameters, to frame its scope, and so on. A model is a way of telling a certain story, and that story must come across as convincing in a precise sense.

Most methodological philosophies come coupled with a narrative, a story, a rhetoric, a set of intuition pumps and images that bridge the methodological protocol and our intuition, making the explanation convincing. Approaches to causal identification are no exception. Structural modeling, for instance, relies on formal models and on the plausibility of the behavior they embody to motivate the story they tell.

Despite its frequent emphasis on “theory-free” data, the experimentalist school also relies on a narrative. Of course, there is a hard formal core: randomizing the treatment overcomes the selection problem. But I would argue that the analogy with the natural sciences, and particularly with medicine, plays a significant role, at least as an intuition pump. All of us are reasonably familiar with how experiments work in Science (with a capital S), since that image is part of our culture, so it is easy to transpose its logic into the social sciences. If experiments are the main paradigm in the natural sciences, shouldn’t we imitate them?

All this talk about habits of thought and intuition pumps may sound like a bunch of amateur sociology, but I believe it is important to be aware of why we think a certain objection does or does not work: it actually took me writing down some symbols to convince myself that the two settings are quite different. Our thought processes, in other words, are guided by these images and narratives that structure intuition, and not by cold logic alone.

Let’s go back to the setting of the previous piece:

y = f(x, z) = α + (β + π′z)x + ε

where:

  • y is some variable to be explained
  • x = [x_1,…,x_K] is a vector that fully characterizes the treatment.
  • z = [z_1,…,z_K] is a vector that fully characterizes the context and that may have both observable and unobservable elements.
  • ε is the error term, that is, all other unobservables that are independently distributed.

Duflo suggested that we could learn a lot from experiments because each combination (y, x, z) can be seen as a data point, and accumulating such points could be a way to pursue external validity. Thinking of (y, x, z) this way, you may wonder how you can extrapolate, that is, how you can draw inferences about y off the support of the sample. When f(x, z) varies smoothly with x and z, you can plausibly carry out this sort of out-of-sample inference whenever the values of interest [x(1), z(1)] are close enough to data points where we do have observations.
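To make that point a bit more tangible, here is a minimal sketch, with a purely made-up response surface f and made-up numbers (none of this is from Duflo): fit the linear-interaction approximation above on data whose support is limited, then compare prediction errors near and far from that support.

```python
# Sketch: extrapolation off the support of (x, z) is plausible when f varies
# smoothly and the target point lies near observed data points; far from the
# support the error is typically much larger. Hypothetical f and numbers.
import numpy as np

rng = np.random.default_rng(0)

def f(x, z):
    # A smooth but nonlinear "true" response surface (an assumption for illustration).
    return 1.0 + np.sin(x) * (1.0 + 0.5 * z)

# Observed data points (y, x, z), with x and z concentrated on a limited support.
n = 200
x = rng.uniform(0.0, 2.0, n)
z = rng.uniform(0.0, 1.0, n)
y = f(x, z) + rng.normal(0.0, 0.1, n)

# Fit the linear-interaction approximation y ≈ α + (β + π z) x from the post.
X = np.column_stack([np.ones(n), x, x * z])
alpha, beta, pi = np.linalg.lstsq(X, y, rcond=None)[0]

def predict(x1, z1):
    return alpha + (beta + pi * z1) * x1

# Near the support the approximation stays reasonable; far off it degrades.
for x1, z1, label in [(1.5, 0.5, "inside support"),
                      (2.2, 1.1, "just outside"),
                      (6.0, 3.0, "far off support")]:
    err = abs(predict(x1, z1) - f(x1, z1))
    print(f"{label:>16}: |prediction error| = {err:.2f}")
```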

I’m no expert in medicine, but I suspect this is roughly the situation there. There is considerable room for accumulating knowledge by replication, for two reasons. First, there is good control over the treatment being administered: different doses, different compositions, and so on, so treatments can easily be replicated and their variation across cases measured. Second, context is likely to matter little: the human body is everywhere the same, and the environment in which a virus evolves tends to be similar, except, of course, when there are epidemiological effects. That is, for any case of interest there are sufficiently many data points in its neighborhood to infer from; there are enough sufficiently similar cases from which to learn.

But this, I would argue, may be substantially different in the social sciences: the data points are “too far apart” from each other to allow credible inference from experimentation. Experiments seem to me too context sensitive, treatments vary, and so on. In my view, this has nothing to do with any essentialist feature of social science. I see it as having to do with (a) the cost, or in some cases the outright impossibility, of performing experiments, which limits the number of data points, and (b) measurement issues.

Why do measurement issues matter? Because of how we “codify” the sample space. It is easy to verify that we are administering the same drug to everyone; it is quite another matter to verify that “unemployment insurance” means a homogeneous treatment across countries.

Again, my view of the way ahead rejoins SC’s suggestion of unbundling treatments into their specific features, rather than merely comparing whole treatments across contexts. It is also where more theory-driven approaches can play a role:

First, [a structural model] matches observed past behavior with a theoretical model to recover fundamental parameters such as preferences and technology. Then, the theoretical model is used to predict the responses to possible environmental changes, including those that have never happened before, under the assumption that the parameters are unchanged.
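Here is a minimal sketch of that two-step logic, using a purely hypothetical constant-elasticity demand model and made-up numbers (this is my illustration, not anything from the quoted source): recover a “deep” parameter from observed behavior, then use it to predict the response to an environment that was never observed.

```python
# Sketch of the structural two-step: (1) recover a structural parameter from
# observed behavior, (2) predict behavior under a never-observed policy,
# assuming the parameter is unchanged. Hypothetical model and numbers.
import numpy as np

rng = np.random.default_rng(1)

# Step 0: simulate observed past behavior under constant-elasticity demand,
#         log q = log A - eta * log p  (eta is the "deep" parameter).
true_eta, true_logA = 1.5, 3.0
log_p = np.log(rng.uniform(1.0, 5.0, 300))          # observed prices
log_q = true_logA - true_eta * log_p + rng.normal(0, 0.05, 300)

# Step 1: match observed behavior to the model, recovering (log A, eta).
X = np.column_stack([np.ones_like(log_p), -log_p])
logA_hat, eta_hat = np.linalg.lstsq(X, log_q, rcond=None)[0]

# Step 2: predict the response to a price level never seen in the data
#         (say a tax pushes the price to 8), holding eta fixed.
p_new = 8.0
q_counterfactual = np.exp(logA_hat - eta_hat * np.log(p_new))
print(f"estimated elasticity: {eta_hat:.2f}, "
      f"predicted demand at p={p_new}: {q_counterfactual:.2f}")
```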

In both cases, the empirical exercise is tied back to more fundamental, deeper factors behind the causal mechanism; this helps to “reduce the dimensionality” of the space, or to make it “denser”, and so makes cases more comparable to each other. I cannot resist quoting James Heckman’s wonderful piece:

A model is a set of possible counterfactual worlds constructed under some rules. The rules may be the laws of physics, the consequences of utility maximization, or the rules governing social interactions, to take only three of many possible examples. A model is in the mind. As a consequence, causality is in the mind.

In order to be precise, counterfactual statements must be made within a precisely stated model. Ambiguity in model specification implies ambiguity in the definition of counterfactuals and hence of the notion of causality. The more complete the model of counterfactuals, the more precise the definition of causality. The ambiguity and controversy surrounding discussions of causal models are consequences of analysts wanting something for nothing: a definition of causality without a clearly articulated model of the phenomenon being described (i.e., a model of counterfactuals). They want to describe a phenomenon as being modeled “causally” without producing a clear model of how the phenomenon being described is generated or what mechanisms select the counterfactuals that are observed in hypothetical or real samples.
