I’ve been re-reading Mary Morgan’s excellent book The World in the Model: How Economists Work and Think. It’s relevant to the debate that’s been happening on Twitter – sorry, X – about the way economic research has become ever-narrower and more technical. As Richard Baldwin put it: “We are getting more and more precise answers to less and less important questions.” I’m not even sure that the precision is real. But there is a mania for technique, whether econometrics, RCTs, or ‘big data’ methods – and above all for ‘identification’, that is, being statistically confident that outcomes can be causally attributed to potential drivers. The identification mania is so intense that I’ve kept a gobsmacking email rejecting an article on the grounds that it ‘wasn’t identified’, when it wasn’t trying to do a causal analysis at all.
The book documents the way ‘models’ have become the dominant way economists work, to the exclusion of other modes of analysis, and argues that this excludes two kinds of work: the big picture and context-specific detail. The professionally high-status work is all “middle-level stuff”, wrapped in techniques and modes of thought that prove to incumbents that the work passes the appropriate professional hurdles. The dangers are obvious: economics is too silent on big issues, too generic on detail, and extraordinarily conformist.
Morgan concludes: “[D]uring the 20th century, modelling became the way to do economics. The term ‘model’ changed from being a noun to being a verb. … The epistemic genre of creating and reasoning with models requires a craft skill working with highly formal instruments. … [I]t comes to be thought to be the ‘right way’…” What’s more, economists are taking the requirement for modelling as the way of knowing into other domains such as social policy or everyday phenomena. “Now, when economists look at their small mathematical models they see the real world, and when they look at the big real world they see it as a sequence of their small models.”