I just found the Nobel note on Heckman and McFadden’s work. It’s a short 11-page read. Heckman’s work on self-selection and McFadden’s on discrete choice are something like the pillars of structural econometrics. I studied the models in the past, but I knew little about the context and details, so I’m putting it “on the shelf”.
I’m currently reading Charles Manski’s “Identification Problems in the Social Sciences”. I’m only in the middle, but I really like his approach so far. This may come as a surprise from someone who favors the structural approach, but I find his “set identification” proposal interesting in the sense of asking: how much can you learn from a sample without making any assumptions about structure or distributional form? I feel the same about non-parametric econometrics. I see it two ways.
- In a positive sense, a “worst case” (that is, assumption-free) identification provides an improvement upon total ignorance using almost only statistical methods.
- In a negative, and from my point of view more interesting, sense, it shows “the limits of inference without theory”. Although Manski favors a “small assumptions” approach that may seem at odds with the structural approach (some of his comments are reported here), I see it as a critique of any pretense of obtaining identification without assumptions, even under experimental conditions. I particularly liked his discussion of the mixing problem and the Perry Preschool Project.
So, how does this fit with my preference for structural econometrics? I see the two as complementary. You can start your analysis by bounding the value of the estimates with non-parametric or set identification techniques. Then proceed to make behavioral assumptions.
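To make the “bound first, assume later” idea concrete, here is a small sketch of my own (not an example from Manski’s book) of the classic worst-case bounds on a mean when some outcomes are missing: the only assumption used is that the outcome lies in a known range, and the missing cases are imputed at the extremes.

```python
import numpy as np

def worst_case_bounds(y, observed, y_lo=0.0, y_hi=1.0):
    """Worst-case (Manski-style) bounds on E[Y] with missing outcomes.

    y:            outcome values (entries where observed is False are ignored)
    observed:     boolean mask, True where Y is observed
    [y_lo, y_hi]: the logical range of Y -- the only assumption imposed
    """
    p = observed.mean()                    # share of the sample with Y observed
    mean_obs = y[observed].mean()          # E[Y | observed]
    lower = p * mean_obs + (1 - p) * y_lo  # all missing outcomes at the minimum
    upper = p * mean_obs + (1 - p) * y_hi  # all missing outcomes at the maximum
    return lower, upper

# Illustration with fabricated data: a binary outcome observed for ~80% of cases.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000).astype(float)
observed = rng.random(1000) < 0.8
lo, hi = worst_case_bounds(y, observed)
```

The width of the interval is exactly the missing-data rate times the range of Y, which is the “limits of inference without theory” point in miniature: only behavioral or distributional assumptions about the missing cases can shrink it further.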
I remember when I was taking Dirk Krueger’s macro class; he is arguably one of the best teachers I’ve ever had. He said something like: in microeconomic theory, you typically operate under conditions of generality. You try to obtain general conclusions while assuming as little as possible about the functional forms or the values of the parameters. In macro, the game is different; you make strong assumptions -say, about constant elasticity of substitution functional forms- in order to obtain precise estimates which you can then test and compare with the data. (There is a good discussion of this, around the equity premium puzzle, in this book.) This is my general view about models. You use a formal model to tell a story. If the model is good, if its assumptions are “robust”, then the story is plausible.
But I want the assumptions to be explicit. That’s why I like Manski -for emphasizing the role of assumptions in inference- and structural identification -for making the assumptions explicit.