Statistics vs truthiness

I thoroughly enjoyed reading Howard Wainer’s Truth or Truthiness: Distinguishing Fact From Fiction By Learning to Think Like a Data Scientist. I even laughed out loud occasionally, as there’s a lot of wit on display here, and one gets a strong sense of Wainer’s personality. This is unusual in a book about statistics (although, having said that, Angrist and Pischke also do quite well on the clarity and fun front, especially for econometricians).

Truth or Truthiness is, in effect, a collection of essays, published as a response to this brave new world of truthiness (i.e. lies that people believe because they want to believe them) in politics and public debate. Wainer writes very clearly about statistics in general, and about his main theme here, causal inference. This is of course dear to the heart of economists, and gratifyingly Wainer recognises that the profession is more scrupulous than most disciplines about causation. The book starts by underlining the importance of having a clear counterfactual in mind and thinking – thinking! – about how it might be possible to estimate the size of any causal effect. As Wainer puts it, “The real world is hopelessly multivariate,” so untangling the causality is never going to happen without careful thought.

I also discovered that one aspect of something that’s bugged me since my thesis days – when I started disaggregating macro data – namely the pitfalls of aggregation, has a name elsewhere in the scholarly forest: “The ecological fallacy, in which apparent structure exists in grouped (e.g. average) data that disappears or even reverses on the individual level.” It seems it’s a commonplace in statistics – here’s one clear explanation I found. Actually, I think the aggregation issues are more extensive in economics; for example I once heard Dave Giles give a brilliant lecture on how time aggregation can lead to spurious autocorrelation results.
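
A toy numerical sketch of the fallacy, with entirely made-up numbers: within each of three groups, x and y rise together perfectly, yet the correlation computed across the group averages is perfectly negative.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Three made-up groups: within each, x and y move together perfectly.
groups = {
    "A": ([1, 2, 3], [8, 9, 10]),
    "B": ([4, 5, 6], [5, 6, 7]),
    "C": ([7, 8, 9], [2, 3, 4]),
}

for name, (xs, ys) in groups.items():
    print(name, pearson(xs, ys))           # +1.0 within every group

# Correlate the group averages instead: the relationship reverses.
avg_x = [sum(xs) / len(xs) for xs, _ in groups.values()]
avg_y = [sum(ys) / len(ys) for _, ys in groups.values()]
print("averages:", pearson(avg_x, avg_y))  # -1.0 across group means
```

A researcher seeing only the grouped data would infer a strong negative relationship that no individual in the data actually exhibits.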

Having said how much I enjoyed reading Truth or Truthiness, I’m not sure who it’s aimed at who isn’t already really interested in statistics. For newcomers to Wainer, I’d recommend his wonderful earlier books, Picturing the Uncertain World and Graphic Discovery. They’re up there with Edward Tufte’s books on intelligent visualisation (rather than the decorative visualisation that’s become unfortunately common).


Facts and values (statistical version)

The trouble with reading two books simultaneously is that it slows down the finishing. But I have now finished a terrific novel, You Don’t Have to Live Like This by Benjamin Markovits – a sort of state-of-the-United-States novel, except that it seems to belong to another age in this grotesque situation of Donald Trump apparently about to become President. And also The Cost of Living in America: A Political History of Economic Statistics, 1880-2000 by Thomas Stapleford.

The title might mark it out as a bit of a niche read – yes, ok – but it is truly a very interesting book. The key underlying message is that all statistics are political, and none more so than a price index. The account is one of the recurring, and recurringly failing, attempts to turn conflicts over the allocation of resources into a technical matter to be resolved by experts. The systematic state collection of statistics is part of the 19th-20th century process of the rationalization of governance, as well as being itself “a form of rationalized knowledge making”. Theodore Porter’s well-known Trust in Numbers: The Pursuit of Objectivity in Science and Public Life has documented the political appeal of developing and using impersonal, quantitative measures and rules. In my experience, statisticians themselves are far more aware than politicians (or indeed economists) of the highly judgmental nature of their work.

The Cost of Living in America presents the history of the development of price measurement in the US, with a division between the labor movement’s emphasis on the standard of living and cost of living, and the increasingly technocratic development of a price index for use in macroeconomic management. The former began with the study of ‘baskets’ of goods and a debate about what working families needed to maintain their standard of living and keep up with the norm. This was affected by context. For example, the price of certain staples including housing rose faster in wartime. New goods appeared. The debate about price indices increasingly revolved around whether to measure the cost of a fixed level of satisfaction or the cost of a fixed basket of goods.
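
The difference matters in practice. Here is a minimal sketch in Python, assuming (purely for illustration) Cobb-Douglas preferences with equal budget shares: when one price doubles, the fixed-basket (Laspeyres) index rises by more than the constant-utility index, because the fixed basket ignores the consumer’s ability to substitute towards the good whose relative price has fallen.

```python
# Two goods; the price of good 1 doubles while good 2 is unchanged.
p0 = (1.0, 1.0)   # base-period prices
p1 = (2.0, 1.0)   # current-period prices
income = 100.0
a = 0.5           # assumed Cobb-Douglas budget share of good 1

# Base-period demands under Cobb-Douglas utility u = x1^a * x2^(1-a):
# the consumer spends share a on good 1 and (1-a) on good 2.
q0 = (a * income / p0[0], (1 - a) * income / p0[1])

# Fixed-basket (Laspeyres) index: cost of the old basket at new prices.
laspeyres = (p1[0] * q0[0] + p1[1] * q0[1]) / income

# Constant-utility (cost-of-living) index: for Cobb-Douglas preferences
# this is a share-weighted geometric mean of the price relatives.
coli = (p1[0] / p0[0]) ** a * (p1[1] / p0[1]) ** (1 - a)

print(f"Laspeyres: {laspeyres:.3f}, constant-utility: {coli:.3f}")
```

With these invented numbers the Laspeyres index says the cost of living rose 50%, while the constant-utility index says about 41% – the gap is the substitution bias the Boskin Commission later made famous.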

By the time of the Boskin Commission, this had been resolved decisively in favour of a constant utility index, the minimum change in expenditure needed to keep utility unchanged. (Robert Gordon has since said the Commission under-stated the over-statement of inflation.) This made accounting for quality change and new goods a pressing issue. Many economists started to agree that the statisticians had not adequately accounted for these in their price indices. Economists including Robert Gordon and Zvi Griliches focused on this question, with Griliches developing the hedonics approach.

Stapleford writes: “If economists were to claim that their discipline had any claim to neutral technical knowledge, surely that claim required them to have neutral apolitical facts – namely economic statistics. … A constant-utility index was surely the proper form for a cost-of-living index, but the idea that one could compare ‘welfare’ in two different contexts [eg two separate time periods or two countries] without introducing subjective (and probably normative) judgments seemed implausible at best.” Yet applying price indices to macroeconomic analysis of growth or productivity rather than labour disputes helped depoliticise them. And hedonics tackled the problem by redefining goods as bundles of characteristics. As the book notes, governments became keen on their statisticians applying hedonics from the mid-1990s, when they realised that it implied very rapid declines in some prices and hence higher productivity growth. (And the ‘accuracy’ of price indices is in question again now because of the ‘productivity puzzle’.)
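
To see hedonics in miniature, here is a sketch with invented numbers of the ‘time dummy’ variant: regress log price on a quality characteristic plus a period dummy, and read the quality-adjusted price change off the dummy’s coefficient. In this toy dataset the average sticker price of computers rises 80%, but once speed improvements are controlled for, prices have fallen 10%.

```python
import math

# Invented data: (speed, price, year_dummy) for computer models in two years.
data = [
    (1.0, 1000.0, 0),
    (2.0, 2000.0, 0),
    (2.0, 1800.0, 1),
    (4.0, 3600.0, 1),
]

# Time-dummy hedonic regression: log(price) = b0 + b1*log(speed) + b2*year.
X = [[1.0, math.log(s), float(d)] for s, _, d in data]
y = [math.log(p) for _, p, _ in data]

# Ordinary least squares via the normal equations (X'X) b = X'y.
k = 3
XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]

# Solve the 3x3 system by Gaussian elimination with partial pivoting.
A = [row[:] + [rhs] for row, rhs in zip(XtX, Xty)]
for col in range(k):
    piv = max(range(col, k), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    for r in range(col + 1, k):
        f = A[r][col] / A[col][col]
        for c in range(col, k + 1):
            A[r][c] -= f * A[col][c]
b = [0.0] * k
for r in range(k - 1, -1, -1):
    b[r] = (A[r][k] - sum(A[r][c] * b[c] for c in range(r + 1, k))) / A[r][r]

avg0 = sum(p for _, p, d in data if d == 0) / 2
avg1 = sum(p for _, p, d in data if d == 1) / 2
print(f"average sticker price: {avg0:.0f} -> {avg1:.0f}")
# b[2] is the quality-adjusted log price change between the two years.
print(f"quality-adjusted price change: {math.exp(b[2]) - 1:+.1%}")
```

That sign flip – rising list prices, falling quality-adjusted prices – is exactly why governments warmed to hedonics once they saw its implications for measured productivity growth.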

But this is an uncomfortable resolution. Although this elegant solution links national income statistics to neoclassical utility theory, there seems to be a category mismatch between a set of accounts measuring total production and the idea that value depends on utility. Setting aside the fact that hedonic methods are not applied to a large proportion of consumer expenditure anyway, this piece of statistical welding is coming under huge strain now because the structure of production in the leading economies is being transformed by digital technology.

One of the many consequences of the Brexit vote and Trumpery is that economists (and others) are again thinking about distribution. The issue is usually framed as the distribution of growth – when it is there, who gets it? I think the question raised for our economic statistics is far more fundamental: we need to recognise the normative judgements involved in the construction of the growth statistics to begin with. Actually existing macroeconomic statistics embed a tacit set of assumptions about welfare, and a production structure which is becoming redundant. But that of course is my favourite theme.


Revolt of the data points

Libération has a very interesting article about the politics and ethics of big data. It cites some excellent books, such as Eden Medina’s brilliant Cybernetic Revolutionaries about Project Cybersyn in Allende’s Chile, and Alain Desrosières’ classic The Politics of Large Numbers. It doesn’t mention Francis Spufford’s Red Plenty, which makes exactly this point (and is a cracking read too): “Soviet planning and ultra-liberalism thus converge in subordinating law to their calculations of utility.”

There is huge interest in using big data techniques to construct better economic statistics (including on my part!). Here in the UK, the ONS is launching a data science campus. The Turing Institute has just been funded by HSBC to look at economic data (although the release says nothing about the work or the researchers involved, so this looks like very early stages). There’s particular progress on constructing price indices using big data, as in this VoxEU column or the Billion Prices Project.

But, as the Libération article underlines, the utopianism of ‘datacratie’ can tip into a dystopian extreme. The technology looks as if it can make the utilitarian project of measuring the costs and benefits of everything a reality, extracting information from every click, every move, every choice. But when the data points (aka humans) realise what’s happening, they won’t necessarily like it.

Bring on the philosophers and ethicists.


Talking statistics in the woods

I just spent a couple of days at an excellent conference, The Political Economy of Macroeconomic Indicators, organised under the new Fickle Formulas programme led by Prof Daniel Mügge of the University of Amsterdam. Authors of several of the wave of books about GDP itself were there: me (GDP: A Brief but Affectionate History), Philipp Lepenies (The Power of a Single Number), Lorenzo Fioramonti (Gross Domestic Problem) and Dirk Philipsen (The Little Big Number). We also had with us Tom Stapleford (The Cost of Living in America), Brett Christophers (Banking Across Boundaries), Yoshiko Herrera (Mirrors of the Economy) and Florence Jany-Catrice (The Social Sciences of Quantification, and also Faut-il attendre la croissance?).

It was also a great conference for hearing speakers refer to other books. I’ve already read Matthias Schmelzer’s The Hegemony of Growth and Ehsan Masood’s The Great Invention. Classics were mentioned, such as Alain Desrosières’ The Politics of Large Numbers. There were others I definitely need to have a look at: The New Global Rulers, for instance; Jacob Assa’s The Financialization of GDP; Paul Edwards’ A Vast Machine. I did feel the most orthodox of the multi-disciplinary crowd; the other economists would mainly describe themselves as heterodox, I think, and there were moments when I played ‘neoliberal’ bingo, so often was the term used. Good for me, no doubt.

The woods at the Drakenburg conference venue in Hilversum


National wellbeing

I just read The Wellbeing of Nations by Paul Allin and David Hand, a very nice overview of the issues in going ‘Beyond GDP’. It came out in 2014, about the same time as my GDP: A Brief but Affectionate History, so unfortunately I’d not had the chance to read it before writing mine. In the couple of years since, the momentum behind the agenda to go ‘beyond’ has certainly increased. This book is a very clear and rigorous but non-technical explanation of the scope of the issues and the state of play. As Allin and Hand describe, there has been a good deal of work on alternative ways of defining and measuring ‘wellbeing’ directly, and on wider approaches to assessing whether or not society is progressing.


I am more cautious than they are about any survey-based direct measurement of wellbeing. There seems to be a lot still to understand about the psychology, and about how people’s judgements are formed. After all, we don’t just introspect; we’re also influenced by social context – have we just read an upbeat book about progress? Or rather, just read the execrable Daily Express? I’m more with the programme when the book looks at how to (greatly) improve what we do now. For instance, report net national income per capita, not total GDP. Include income distribution and environmental measures. As they note, there are already statistics on many indicators that would give a richer picture of economic welfare. Jones and Klenow have a very nice recent paper on a single summary measure of aggregate economic welfare rooted in economic theory: it calculates a consumption-equivalent measure combining income/leisure, distribution and life expectancy. This omits questions of environmental sustainability, but good progress is being made on environmental ‘satellite’ accounts and natural capital measurement.

There are some important questions not addressed by Allin and Hand. They describe a proliferation of approaches to measuring wellbeing and indeed call for a thousand flowers to bloom. In my view, if there is no narrowing down of the options, the existing standard of GDP and the conventional national accounts will be far harder to dislodge. A new focal point is needed. (I have a paper on this out soon. Others – like Ehsan Masood in The Great Invention – call for a single index for this reason, although for different single indices.) The reason is not tidy-mindedness, but rather the role that official economic statistics play in holding governments to account.

The other question ignored by all of what you could describe as the pro-wellbeing literature (not that I’m against wellbeing) is innovation. In disparaging GDP growth as a metric, they overlook the fact that GDP growth is not mainly about more shoes, food and vehicles of the same kind; it is mainly the introduction of innovations, from small changes in variety to profound new technologies like the smartphone or personalised cancer treatment. GDP doesn’t measure these well, and there is a fuzziness between quality change potentially reflected in prices, real growth, and unmeasurable consumer surplus. But innovation is a huge contributor to wellbeing, and people will continue to like ‘growth’. No-growth is a non-starter outside authoritarian and autarkic polities.

These caveats aside, I really liked the book and it is well worth a read if you’re interested in this territory. As many people are – statistics is the new rock and roll.
