I am currently reading a paper. It’s basically an empirical piece. The ‘theory’ it tries to test has the following structure: things tend to be like A, which implies B; but then sometimes C, which is why we observe D; except that if we observe E, it no longer holds, because X believe F, and so on.
The operationalization of the empirical test is done by partitioning the outcome space into a sequence of propositions: ‘Hypothesis 1 [fancy name such as “self-interested voter effect”]: agent A believes S, which is why we should observe B’, followed by an informal discussion of the mechanism. Several hypotheses are enumerated, and the empirical test then claims to find support for one of them.
I’m sure you have encountered papers like this; empirical research is full of them. To be clear, nothing about this approach inevitably leads you to be wrong. But the style is painful to read, and there are so many contingencies to keep track of. Worse, the mechanisms discussed under each hypothesis are sometimes unidentified even when endogeneity and the like are taken care of, because ATE and LATE are about effects rather than mechanisms. Perhaps it is a matter of intellectual taste, but I can’t stand this amateur-psychology-fed, ad hoc typology-building style, which is particularly common in political behavior papers.
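To make the effects-versus-mechanisms point concrete, here is a toy simulation (entirely my own, purely illustrative): two data-generating processes with different mechanisms, one where treatment acts directly and one where it acts only through a mediator, produce the same average treatment effect, so an ATE estimate alone cannot tell the two stories apart.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
t = rng.integers(0, 2, n)  # randomized binary treatment

# Mechanism 1: treatment affects the outcome directly.
y1 = 2.0 * t + rng.normal(size=n)

# Mechanism 2: treatment works entirely through a mediator M.
m = 2.0 * t + rng.normal(size=n)
y2 = 1.0 * m + rng.normal(size=n)

# Difference-in-means estimator of the ATE under randomization.
ate1 = y1[t == 1].mean() - y1[t == 0].mean()
ate2 = y2[t == 1].mean() - y2[t == 0].mean()
print(ate1, ate2)  # both close to 2.0
```

Both estimates sit near 2.0, even though the stories behind the hypotheses are completely different, which is exactly why the informal mechanism discussion is doing work the estimator does not.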
Something that was immediately obvious to me is that there would be no formal model in the paper, and I was right. I have the feeling that formal modeling induces parsimony, which translates into a certain clarity about what is being claimed. It imposes a penalty on complexity and, to push the statistical parallel, works as a “shrinkage method” on your theory, restraining your degrees of freedom.
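The shrinkage parallel can itself be illustrated with the statistical object it borrows from. A minimal sketch (data and penalty value are made up for illustration): ridge regression adds a complexity penalty to least squares, and the penalized estimate is pulled toward zero, just as a formal model penalizes a theory’s free-floating contingencies.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[0] = 3.0  # only one real effect among ten candidates
y = X @ beta_true + rng.normal(size=n)

def ridge(X, y, lam):
    """Closed-form ridge estimator: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

ols = ridge(X, y, 0.0)      # lam = 0 recovers ordinary least squares
shrunk = ridge(X, y, 10.0)  # the penalty shrinks the coefficient vector

print(np.linalg.norm(ols), np.linalg.norm(shrunk))
```

The penalized fit has a strictly smaller coefficient norm: the estimator gives up some in-sample flexibility in exchange for discipline, which is roughly the trade a formal model forces on a theory.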