Agreeing about GDP, disagreeing about the beyond

It’s always a pleasure for me to have a new book about GDP or economic statistics more generally to read (no surprise!) – and this is no longer such a niche taste as it once was. So I very much enjoyed Lorenzo Fioramonti’s The World After GDP: Politics, Business and Society in the Post-Growth Era, a sequel to his Gross Domestic Problem.

The book rightly points to the fact that the many known flaws of GDP as a measure of economic welfare are becoming still more severe. The environmental issues are obvious. There have been longstanding critiques of the omission of much ‘household production’ (although not, as the book seems to suggest, of all informal economic activity: it is household services such as cleaning and childcare that are left out, while household products such as food are included in the definition of GDP). Still, as the book observes, GDP conceives of firms and governments as productive and households as non-productive, and while this has some logic in terms of transactions, it has none in terms of economic welfare.

The digital transformation of the economy is making measurement ever harder: “One of the crucial flaws of GDP is its inability to capture the dynamic value embedded in all sorts of innovations, especially when they reduce costs, distribute access and increase what economists call ‘consumer surplus’.”

Fioramonti also highlights the political economy of GDP – its role as the sole criterion for assessing success or failure, and the growing ‘administrative uses’ such as debt and deficit rules expressed as a percentage of GDP, distorting policies. Unlike Ehsan Masood (in The Great Invention), who argues for a replacement for GDP, Fioramonti favours a suite of indicators. I don’t know which I think is better – there is a strong argument for needing more than one dimension of measurement, but there are too many dashboards and sets of goals already. And perhaps people can only pay attention to one summary statistic? Either way, as Fioramonti says here, economic transformations go hand in hand with transformed measurement frameworks.

He makes some points I hadn’t considered and will think about more. For instance, the switch from GNP (the total output of all nationally-owned entities) to GDP (the total output produced within the national territory) provided “an accounting system in support of neoliberal economic globalization,” he writes. That ‘in support of’ suggests intentionality – I don’t know what the historical process was but doubt this was the case. However, it’s an interesting question whether the switch enabled or encouraged globalization (I’ll overlook the n-word).

While there’s much I agree with, there are also points on which I disagree, sometimes strongly. For instance, there is a truly bizarre section on competition, which Fioramonti sees as a wholly negative phenomenon, creating negative externalities. He alludes to competition as ‘random, disorganized interactions’, and yet at the same time describes it as a top-down, centralized process. I’m not sure how he thinks the technological change he refers to elsewhere happens without rivalry between businesses; and if he sees markets as centralized, how does competition come into it?

He argues also for localized, co-operative production, which seems to me a largely romantic dream: feasible for some kinds of business (including, perhaps ironically, some ‘sharing’ platforms), but unable to scale to an economy capable of providing goods and services to the population as a whole. This section cites Elinor Ostrom, whose work is indeed marvellous, highlighting the range of possible collective economic institutions other than markets and centralized states. However, she also underlines that the institutions she explores cannot function beyond a certain scale. The idea that all or many of the products and services in modern economies could be produced at homespun scale is a nonsense. But then, I also think the idea of a post-growth era overlooks the intangible character of modern growth, and anyway requires some honesty about what the politics of no-growth would be like – think 10 years of stagnant real earnings, not cosy homesteading.
Still, a certain amount of disagreement adds savour to the book. I really enjoyed reading it, and so will anyone interested in the GDP and measurement debates – and there are plenty of you out there.


Statistics vs truthiness

I thoroughly enjoyed reading Howard Wainer’s Truth or Truthiness: Distinguishing Fact From Fiction By Learning to Think Like a Data Scientist. I even laughed out loud occasionally, as there’s a lot of wit on display here, and one gets a strong sense of Wainer’s personality. This is not usual in a book about statistics (although having said that, Angrist and Pischke also do quite well on the clarity and fun front, especially for econometricians.)

Truth or Truthiness is, in effect, a collection of essays, published as a response to this brave new world of truthiness (i.e. lies that people believe because they want to) in politics and public debate. Wainer writes very clearly about statistics in general, and his main theme here, causal inference. This is of course dear to the heart of economists, and gratifyingly Wainer recognises that the profession is more scrupulous than most disciplines about causation. The book starts by underlining the importance of having a clear counterfactual in mind and thinking – thinking! – about how it might be possible to estimate the size of any causal effect. As Wainer puts it, “The real world is hopelessly multivariate,” so untangling the causality is never going to happen without careful thought.

I also discovered that one aspect of something that’s bugged me since my thesis days – when I started disaggregating macro data – namely the pitfalls of aggregation, has a name elsewhere in the scholarly forest: “The ecological fallacy, in which apparent structure exists in grouped (e.g. average) data that disappears or even reverses on the individual level.” It seems it’s a commonplace in statistics – here’s one clear explanation I found. Actually, I think the aggregation issues are more extensive in economics; for example I once heard Dave Giles give a brilliant lecture on how time aggregation can lead to spurious autocorrelation results.
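The reversal is easy to demonstrate with a toy dataset (invented numbers, purely for illustration, not from Wainer's book): within each group, y falls as x rises, yet pooling the two groups produces a strongly positive relationship, because the between-group differences dominate.

```python
# Ecological fallacy demo: within-group correlations are negative,
# but the pooled (grouped/aggregate) data show a positive correlation.

def corr(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two groups: within each, y falls by 1 as x rises by 1 (correlation = -1)
group_a = ([1, 2, 3], [10, 9, 8])
group_b = ([11, 12, 13], [20, 19, 18])

pooled_x = group_a[0] + group_b[0]
pooled_y = group_a[1] + group_b[1]

print(corr(*group_a))            # negative within group A
print(corr(*group_b))            # negative within group B
print(corr(pooled_x, pooled_y))  # strongly positive in the pooled data
```

Exactly the same mechanism is at work when structure visible in averages vanishes, or flips sign, in the underlying individual data.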

Having said how much I enjoyed reading Truth or Truthiness, I’m not sure who it’s aimed at who isn’t already really interested in statistics. For newcomers to Wainer, I’d recommend his wonderful earlier books, Picturing the Uncertain World and Graphic Discovery. They’re up there with Edward Tufte’s books on intelligent visualisation (rather than the decorative visualisation that’s become unfortunately common).



Facts and values (statistical version)

The trouble with reading two books simultaneously is that it slows down the finishing. But I have now finished a terrific novel, You Don’t Have to Live Like This by Benjamin Markovits – a sort of state-of-the-United-States novel, except it seems like another age in this grotesque situation of Donald Trump apparently about to become President. And also The Cost of Living in America: A Political History of Economic Statistics, 1880-2000 by Thomas Stapleford.

The title might mark it out as a bit of a niche read – yes, ok – but it is truly a very interesting book. The key underlying message is that all statistics are political, and none more so than a price index. The account is one of the recurring, and recurringly failing, attempts to turn conflicts over the allocation of resources into a technical matter to be resolved by experts. The systematic state collection of statistics is part of the 19th-20th century process of the rationalization of governance as well as being itself “a form of rationalized knowledge making”. Theodore Porter’s well-known Trust in Numbers: The Pursuit of Objectivity in Science and Public Life has documented the political appeal of developing and using impersonal, quantitative measures and rules. In my experience, statisticians themselves are far more aware than politicians (or indeed economists) of the highly judgmental nature of their work.

The Cost of Living in America presents the history of the development of price measurement in the US, with a division between the labor movement’s emphasis on the standard of living and cost of living, and the increasingly technocratic development of a price index for use in macroeconomic management. The former began with the study of ‘baskets’ of goods and a debate about what working families needed to maintain their standard of living and keep up with the norm. This was affected by context. For example, the price of certain staples including housing rose faster in wartime. New goods appeared. The debate about price indices increasingly revolved around whether to try to measure the cost of a fixed level of satisfaction, or the cost of a fixed basket of goods.
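The fixed-basket option is what index-number theory calls a Laspeyres index (my gloss in standard notation, not the book's): price the base-period basket at each period's prices and take the ratio,

```latex
P_L = \frac{\sum_i p_{i,1}\, q_{i,0}}{\sum_i p_{i,0}\, q_{i,0}}
```

where $p_{i,t}$ and $q_{i,t}$ are the price and quantity of good $i$ in period $t$. The quantities are frozen at period 0, so only price changes move the index – which is exactly why new goods and changing consumption patterns cause it trouble.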

By the time of the Boskin Commission, this had been resolved decisively in favour of a constant utility index, the minimum change in expenditure needed to keep utility unchanged. (Robert Gordon has since said the Commission under-stated the over-statement of inflation.) This made accounting for quality change and new goods a pressing issue. Many economists started to agree that the statisticians had not adequately accounted for these in their price indices. Economists including Robert Gordon and Zvi Griliches focused on this question, Griliches developing the hedonics approach.
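In symbols (again my gloss, not Stapleford's notation): writing $e(p,u)$ for the minimum expenditure needed to reach utility level $u$ at prices $p$, the constant-utility (cost-of-living) index between periods 0 and 1 is

```latex
\mathrm{COLI} = \frac{e(p_1, u_0)}{e(p_0, u_0)}
```

the ratio of what it costs to stay equally well off at the new prices to what it cost at the old – which is precisely where the problem of comparing ‘welfare’ across contexts enters.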

Stapleford writes: “If economists were to claim that their discipline had any claim to neutral technical knowledge, surely that claim required them to have neutral apolitical facts – namely economic statistics. … A constant-utility index was surely the proper form for a cost-of-living index, but the idea that one could compare ‘welfare’ in two different contexts [eg two separate time periods or two countries] without introducing subjective (and probably normative) judgments seemed implausible at best.” Yet applying price indices to macroeconomic analysis of growth or productivity rather than labour disputes helped depoliticise them. And hedonics tackled the problem by redefining goods as bundles of characteristics. As the book notes, governments became keen on their statisticians applying hedonics, from the mid-90s, when they realised that it implied very rapid declines in some prices and hence higher productivity growth. (And the ‘accuracy’ of price indices is in question again now because of the ‘productivity puzzle’.)
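In its textbook form (a standard formulation, not a quotation from the book), the hedonic approach Griliches developed regresses log price on measurable characteristics:

```latex
\ln p_i = \beta_0 + \sum_k \beta_k x_{ik} + \varepsilon_i
```

where each good $i$ is treated as a bundle of characteristics $x_{ik}$ (processor speed, memory, and so on). Whatever price change remains after controlling for the characteristics counts as pure inflation; improvements in the characteristics themselves register as quality-adjusted price declines.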

But this is an uncomfortable resolution. Although this elegant solution links national income statistics to neoclassical utility theory, there seems a category mismatch between a set of accounts measuring total production and the idea that value depends on utility. Setting aside the fact that hedonic methods are not applied to a large proportion of consumer expenditure anyway, this piece of statistical welding is coming under huge strain now because the structure of production in the leading economies is being transformed by digital technology.

One of the many consequences of the Brexit vote and Trumpery is that economists (and others) are again thinking about distribution. The issue is usually framed as the distribution of growth – when it is there, who gets it? I think the question raised for our economic statistics is far more fundamental: we need to recognise the normative judgements involved in the construction of the growth statistics to begin with. Actually existing macroeconomic statistics embed a tacit set of assumptions about welfare, and a production structure which is becoming redundant. But that of course is my favourite theme.



Revolt of the data points

Libération has a very interesting article about the politics and ethics of big data. It cites some excellent books, such as Eden Medina’s brilliant Cybernetic Revolutionaries about Project Cybersyn in Allende’s Chile, and Alain Desrosières’ classic The Politics of Large Numbers. It doesn’t mention Francis Spufford’s Red Plenty, which makes exactly this point (and is a cracking read too): “Soviet planning and ultra-liberalism thus come together in subordinating the law to their calculations of utility.”

There is huge interest in using big data techniques to construct better economic statistics (including on my part!). Here in the UK, the ONS is launching a data science campus. The Turing Institute has just been funded by HSBC to look at economic data (although the release says nothing about the work or the researchers involved, so this looks like very early stages). There’s particular progress on constructing price indices using big data, as in this VoxEU column or the Billion Prices Project.

But, as the Libération article underlines, the utopianism of ‘datacratie’ can tip into a dystopian extreme. The technology looks like it can make the utilitarian project of measuring the costs and benefits of everything a reality, extracting information from every click, every move, every choice. But when the data points (aka humans) realise what’s happening, they won’t necessarily like it.

Bring on the philosophers and ethicists.



Talking statistics in the woods

I just spent a couple of days at an excellent conference, The Political Economy of Macroeconomic Indicators, organised under the new Fickle Formulas programme led by Prof Daniel Mügge of the University of Amsterdam. Authors of several of the wave of books about GDP itself were there: me (GDP: A Brief but Affectionate History), Philipp Lepenies (The Power of a Single Number), Lorenzo Fioramonti (Gross Domestic Problem) and Dirk Philipsen (The Little Big Number). We also had with us Tom Stapleford (The Cost of Living in America), Brett Christophers (Banking Across Boundaries), Yoshiko Herrera (Mirrors of the Economy) and Florence Jany-Catrice (The Social Sciences of Quantification, and also Faut-il attendre la croissance?).

It was also a great conference for hearing speakers refer to other books. I’ve already read Matthias Schmelzer’s The Hegemony of Growth and Ehsan Masood’s The Great Invention. Classics were mentioned, such as Alain Desrosières’ The Politics of Large Numbers. There were others I definitely need to have a look at. The New Global Rulers, for instance. Jacob Assa’s The Financialization of GDP. Paul Edwards’ A Vast Machine. I did feel the most orthodox of the multi-disciplinary crowd; the other economists would mainly describe themselves as heterodox, I think, and there were moments when I played ‘neoliberal’ bingo, so often was the term used. Good for me, no doubt.

The woods at the Drakenburg conference venue in Hilversum

