I had thought Joshua Angrist and Jörn-Steffen Pischke had reached the pinnacle of accomplishment in econometrics texts with their Mostly Harmless Econometrics. That’s a fabulous, clear, practical manual – so good that my eldest son, a recently minted economist, has perma-borrowed my copy. What was particularly good about that book is its clarity about the importance of thinking through your null and alternative hypotheses – it’s one of my bugbears in life that people are so rarely clear about their counterfactual.
Yesterday I read – devoured – almost all of Mastering ‘Metrics: The Path From Cause to Effect, their follow-up textbook, covering more basic material at a level suitable for students meeting it for the first time – but also for practising economists who learned their econometrics long ago. Econometrics is one of the unsung fields of economics where there has been a stupendous amount of progress during the past 20 years, due to a mixture of more data, faster computers, better software – and improved econometric methodology. A lot of this methodological advance happened after I did my PhD (macro – yes, really! – and econometrics), so although I’ve picked up a lot of the material this book covers, I found it an incredibly illuminating read.
The perspective the book takes is how to answer questions about causality, and it presents five approaches: randomised trials, regression, IV/2SLS, regression discontinuity design, and differences-in-differences. Each chapter sets out an empirical question which is used to take the reader step by step through the methodology. The chapters mainly use verbal explanation, with a minimum of equations, and each has a more technical but still extremely clear appendix for students or practitioners needing that material. There are plenty of practical tips, for example on how to interpret the size of coefficients, how to sense-check results, and how to check whether there might be omitted variable bias and what sign and size it might have. I love that they say your software will calculate this complicated standard error for you, so there’s no need to memorise the extremely complicated formula (I speak bitterly as one who wrote Fortran programmes to calculate the damn things, 30 years ago). Each of the examples reveals why simple, compelling data correlations of the kind discussed constantly in the world of policy can be completely misleading – or not.
Another nice feature is that each chapter ends with a couple of pages on the pioneers of statistical methods, explaining their contributions and the kinds of empirical problem their innovations were designed to address.
It isn’t a perfect book. There’s a Kung Fu theme which is meant to make it more approachable and fun, but which grated on me a bit. Still, it may be as close to perfection as you can get in this world to an introductory econometrics text. Ideal for students, and ideal for older economists who privately admit they could do with brushing up their econometric knowledge a bit as they look at the figures generated by their software packages. I’ll find this a very useful book, not least when it comes to reading other economists’ papers. It explains how to set about delivering on the huge promise of economics as a careful, empirical science, although of course there is no substitute for thinking carefully about the context in which any given set of data has been generated, and what causal influences could have given rise to it.
Update: Dimitrios Diamantaras pointed me via Twitter to Francis Diebold’s dyspeptic review of Mostly Harmless Econometrics – I think its tone is harsh, and it seems mainly concerned with the title; but it is worth noting that, indeed, neither Angrist and Pischke book covers time series econometrics.