On being factful

I’ve torn through the late Hans Rosling’s Factfulness, completed by his son and daughter-in-law, Ola Rosling and Anna Rosling Ronnlund. I wish everybody could read it. The message in a nutshell is: learn to reflect about the news you read/hear, and appreciate that there’s a difference between good and better.

The book is written around a multiple choice quiz Rosling administered online and to many audiences he spoke to, which revealed that large majorities of all kinds of audiences (including Nobel prize winners and the Davos elite) get certain basic facts very badly wrong. In a lot of ways, things are not at all great, around the world. But they are a lot better than they used to be, and a lot better than most people think they are. This applies to health and longevity, girls’ education, violent crime, and so on.

In terms of applying a bit of thought to the drama of the news, Rosling gives very practical advice: always think about the size of a number you hear – what would be a good comparator, should it be a ratio; don’t always extrapolate in a straight line; don’t mistake the extremes for the typical experience; understand the power of exponential change. He means ‘factfulness’ as an analogue to ‘mindfulness’ (but much more worthwhile).

I like Rosling’s philosophy: “The goal of higher income is not just bigger piles of money. The goal of longer lives is not just extra time. The ultimate goal is to have the freedom to do what we want.” I agree with his diagnosis that fear plays a big part in our responses to news: “Critical thinking is always difficult but it’s almost impossible when we’re scared.”

Although (to boast a bit) I got 12/13 on the quiz, I learned a lot from the book. For instance, I was struck by his argument that given we know 88% of the world’s children now get vaccinated, this tells us most countries can now sustain the logistics and infrastructure of the necessary cool chains; “This is exactly the same infrastructure needed to establish new factories,” Rosling writes. Yet 88% of the investors he quizzed thought only 20% of children get vaccinated, and so don’t know about that opportunity. He also argues that big pharma is missing a business opportunity in its focus on expensive new drugs for the richest markets, when it could offer older drugs at lower prices in the extensive middle-income markets.

The book is also a delightful read, full of personal anecdote. One can hear his voice, which is a testament to the loving work his son and daughter-in-law put into preparing it for publication. Also worth a look is Anna Rosling Ronnlund’s fascinating Dollar Street photography project.


Politics and numbers

I’m thoroughly enjoying William Deringer’s Calculated Values: Finance, Politics and the Quantitative Age – almost finished. The book asks why, from the early 18th century, arguments about numbers – statistics – came to have a special weight in political debate. Often, the growing role of statistics (in the original sense of numbers relevant to the state) is seen as part of the general Enlightenment spread of scientific discourse and rationality. Deringer argues that in fact the deployment of numerical calculation about the state of the nation was one of the weapons of choice in the bitter political division between Whigs and Tories. The sphere of public debate was, as he puts it, “uncivil, impolite, sometimes irrational”.

The book presents plentiful evidence of this use of numbers. For instance, pamphlets proliferated about hot issues such as the payment to be made to Scotland for its Union with England, or the balance of trade – and were packed with errors. Printers, and presumably readers, “did not seem to care about getting the numbers right.” They were only there to slug against somebody else’s numbers. Deringer writes (in the context of the fierce debate about the ‘Equivalent’ England should pay Scotland on their Union), “Critics of quantification in the modern era argue that efforts to address political questions through quantitative means often have the effect of foreclosing political controversy by translating substantive political questions into ‘technical’ ones. This was not the case in 1706; if anything, the opposite was true. The Equivalent calculation brought new (technical) questions into public view and revealed their political stakes.”

The book therefore argues that the emergence of partisan politics after the 1688 Revolution was a major reason for the emergence of the political life of numbers. Starting with a more or less blank slate, a good part of the contest was epistemological: what were the right categories and classifications to even begin to measure? What was the right methodology? For example, one innovation was the introduction of discounting future values, something many contemporaries found both hard to understand and unsettling: “It placed almost no value on anything that happened beyond one human lifetime. This peculiar claim clashed violently with many Britons’ intuition about what the future was worth to them.” It probably still clashes with intuition, to the extent the typical Briton thinks about it.
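The arithmetic behind that unsettling feature of discounting is easy to sketch (the 6% rate and the horizons below are my own illustrative choices, not Deringer’s): at any plausible interest rate, compound discounting shrinks sums beyond a human lifetime to nearly nothing.

```python
# Toy illustration (rate and horizons are mine, not Deringer's) of why
# discounting "placed almost no value on anything that happened beyond
# one human lifetime".
def present_value(amount, rate, years):
    """Discount a future sum back to today at a constant annual rate."""
    return amount / (1 + rate) ** years

# What is £100, due at various future dates, worth today at 6% a year?
for years in (10, 50, 80, 150):
    print(f"£100 in {years:>3} years is worth "
          f"£{present_value(100, 0.06, years):.2f} today")
```

At 6%, £100 due in 80 years is worth less than £1 today, and £100 due in 150 years is pennies – exactly the property contemporaries found so counter-intuitive.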

Of course, one can’t read this history and not think about the notoriously misleading £350m claim on the Brexit bus, or the claim and counter-claim about the likely Brexit effect on UK trade. It isn’t that both sides are equally true or false – far from it – but the political clash is putting statistics centre stage. David Hume was sceptical about the usefulness of the quantitative debate for exactly this reason: “Every man who has ever reasoned on this subject, has always proved his theory, whatever it was, by facts and calculations.” I’m with Deringer in concluding that nevertheless the conversation about numbers is essential.

As Calculated Values concludes, “modern quantitative culture is fundamentally two-sided.” People think numerical evidence has a special, trustworthy status; and at the same time that it’s especially easy to lie with statistics. These are two sides of the same coin. The adversarial debate reveals how useful the numbers and calculations can be to interrogate and test the opposing claims.

Attitudes have shifted over time. In the 19th century, the partisan contest abated. Dickens for one saw numbers as a dry, soulless window on society when he portrayed Mr Gradgrind in Hard Times. The bureaucratisation of statistics, and the development of official agencies, made statistics seem just a tiny bit dull. Needless to say, they are centre stage again now for reasons of both political conflict and epistemological uncertainty. Once again, some politicians wield numbers without any great concern about their accuracy or meaningfulness; the victory in debate is all that matters. Once again, given the profound changes in the structure of the economy, we can’t be sure what categories and methods will give us the understanding we would like. This is a terrific book for reflecting on contested and uncertain statistical terrain.


Economic observation

On Friday all the researchers in the new Economic Statistics Centre of Excellence (ESCoE) met at its home in the National Institute to catch up on the range of projects, and it was terrific to hear about the progress and challenges across the entire span of the research programme.

One of the projects is concerned with measuring uncertainty in economic statistics and communicating that uncertainty. The discussion sent me back to Oskar Morgenstern’s 1950 On the Accuracy of Economic Observations (I have the 2nd, 1963, edition). It’s a brilliant book, too little remembered. Morgenstern is somewhat pessimistic about both how meaningful economic statistics can be and whether people will ever get their heads around the inherent uncertainty.

“The indisputable fact that our final gross national product or national income data cannot possibly be free of error raises the question whether the computation of growth rates has any value whatsoever,” he writes, after showing that even small errors in levels data imply big margins of error in growth rates.
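Morgenstern’s point about growth rates is worth seeing in numbers (the 2% growth figure and the ±3% level error below are my own illustrative choices, not his): even modest errors in the levels can swamp the growth rate entirely.

```python
# Rough illustration (numbers mine, not Morgenstern's) of how small errors
# in levels data imply big margins of error in growth rates.
def growth_rate(level_now, level_before):
    """Period-on-period growth rate implied by two levels."""
    return level_now / level_before - 1

true_before, true_now = 100.0, 102.0   # true growth: 2%
error = 0.03                            # each level measured to within ±3%

# Worst cases: last year's level overstated and this year's understated,
# and vice versa.
low  = growth_rate(true_now * (1 - error), true_before * (1 + error))
high = growth_rate(true_now * (1 + error), true_before * (1 - error))
print(f"measured growth could lie anywhere between {low:.1%} and {high:.1%}")
```

With a true growth rate of 2%, the measured figure could land anywhere from roughly −4% to +8% – which is precisely why Morgenstern questioned whether computing growth rates “has any value whatsoever”.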

On the communications front, he noted that members of the public were often suspicious of economic statistics – and rightly so: “The professional users of economic and social statistics strangely enough often seem to be less skeptical than the public.” Yet, he added, public trust was essential both to deliver the appropriations of funding for statistical agencies and so that people had the confidence to provide information to statisticians.

I do find it odd that many economists download the productivity data from standard online sources uncritically and pronounce on the ‘puzzle’ of its zero growth when so many providers of the raw data (businesses in this case) point out that from their perspective there are significant productivity gains. But that’s what the ESCoE is about – trying to resolve a different puzzle, that of two contradictory sets of evidence – and it’s keeping me gainfully occupied.


Agreeing about GDP, disagreeing about the beyond

It’s always a pleasure for me to have a new book about GDP or economic statistics more generally to read (no surprise!) – and this is no longer such a niche taste as it once was. So I very much enjoyed Lorenzo Fioramonti’s The World After GDP: Politics, Business and Society in the Post-Growth Era, a sequel to his Gross Domestic Problem.

The book rightly points to the fact that the many known flaws with GDP as a measure of economic welfare are becoming still more severe. The environmental issues are obvious. There have been longstanding critiques of the omission of much ‘household production’ (although not, as the book seems to suggest, of all informal economic activity: it is household services such as cleaning and childcare that are left out, while household products such as food grown for the family’s own consumption are included in the definition of GDP). Still, as the book observes, GDP conceives of firms and governments as productive and households as non-productive, and while this has some logic in terms of transactions, it has none in terms of economic welfare.

The digital transformation of the economy is making measurement ever harder: “One of the crucial flaws of GDP is its inability to capture the dynamic value embedded in all sorts of innovations, especially when they reduce costs, distribute access and increase what economists call ‘consumer surplus’.”

Fioramonti also highlights the political economy of GDP – its role as a sole criterion for assessing success or failure, and the growing ‘administrative uses’ such as debt and deficit rules expressed as a percentage of GDP, distorting policies. Unlike Ehsan Masood (in The Great Invention), who argues for a replacement for GDP, Fioramonti favours a suite of indicators. I don’t know which I think is better – there is a strong argument for needing more than one dimension of measurement but there are too many dashboards and sets of goals already. And perhaps people can only pay attention to one summary statistic? Either way, as Fioramonti says here, economic transformations go hand in hand with transformed measurement frameworks.

He makes some points I hadn’t considered and will think about more. For instance, the switch from GNP (the total output of all nationally-owned entities) to GDP (the total output produced within the national territory) provided “an accounting system in support of neoliberal economic globalization,” he writes. That ‘in support of’ suggests intentionality – I don’t know what the historical process was but doubt this was the case. However, it’s an interesting question whether the switch enabled or encouraged globalization (I’ll overlook the n-word).

While there’s much I agree with, there are also points on which I disagree, sometimes strongly. For instance, there is a truly bizarre section on competition, which Fioramonti sees as a wholly negative phenomenon, creating negative externalities. He alludes to competition as ‘random, disorganized interactions’, yet at the same time describes it as a top-down, centralized process. I’m not sure how he thinks the technological change he refers to elsewhere happens without rivalry between businesses; and if markets are centralized, as he seems to believe, where does competition come into it? He argues also for localized, co-operative production, which seems to me a largely romantic dream: feasible for some kinds of business (including, perhaps ironically, some ‘sharing’ platforms), but not something that scales to an economy capable of providing goods and services to the population as a whole.

This section cites Elinor Ostrom, whose work is indeed marvellous, highlighting the range of possible collective economic institutions other than markets and centralized states. However, she also underlines that the institutions she explores cannot function beyond a certain scale. The idea that all or many of the products and services in modern economies could be produced at homespun scale is a nonsense. But then, I also think the idea of a post-growth era overlooks the intangible character of modern growth, and anyway requires some honesty about what the politics of no-growth would be like – think 10 years of stagnant real earnings, not cosy homesteading.
Still, a certain amount of disagreement adds savour to the book. I really enjoyed reading it, and so will anyone interested in the GDP and measurement debates – and there are plenty of you out there.


Statistics vs truthiness

I thoroughly enjoyed reading Howard Wainer’s Truth or Truthiness: Distinguishing Fact From Fiction By Learning to Think Like a Data Scientist. I even laughed out loud occasionally, as there’s a lot of wit on display here, and one gets a strong sense of Wainer’s personality. This is not usual in a book about statistics (although, having said that, Angrist and Pischke also do quite well on the clarity and fun front, especially for econometricians).

Truth or Truthiness is, in effect, a collection of essays, published as a response to this brave new world of truthiness (i.e. lies that people believe because they want to) in politics and public debate. Wainer writes very clearly about statistics in general, and his main theme here, causal inference. This is of course dear to the heart of economists, and gratifyingly Wainer recognises that the profession is more scrupulous than most disciplines about causation. The book starts by underlining the importance of having a clear counterfactual in mind and thinking – thinking! – about how it might be possible to estimate the size of any causal effect. As Wainer puts it, “The real world is hopelessly multivariate,” so untangling the causality is never going to happen without careful thought.

I also discovered that one aspect of something that’s bugged me since my thesis days – when I started disaggregating macro data – namely the pitfalls of aggregation, has a name elsewhere in the scholarly forest: “The ecological fallacy, in which apparent structure exists in grouped (e.g. average) data that disappears or even reverses on the individual level.” It seems it’s a commonplace in statistics – here’s one clear explanation I found. Actually, I think the aggregation issues are more extensive in economics; for example I once heard Dave Giles give a brilliant lecture on how time aggregation can lead to spurious autocorrelation results.
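A toy example (my own, not Wainer’s) shows the reversal in action: within each of two groups the relationship between two variables is negative, but the group averages trend upward, so the grouped data points in the opposite direction to the individual-level data.

```python
# Toy illustration (mine, not Wainer's) of the ecological fallacy: structure
# in grouped (average) data reverses at the individual level.
def mean(xs):
    return sum(xs) / len(xs)

def slope(xs, ys):
    """Ordinary least squares slope of y on x."""
    mx, my = mean(xs), mean(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Two groups; within each, y falls as x rises
group_a = ([1, 2, 3], [5, 4, 3])
group_b = ([6, 7, 8], [10, 9, 8])

within_a = slope(*group_a)   # -1.0: negative within the group
within_b = slope(*group_b)   # -1.0: negative within the group
between = slope([mean(group_a[0]), mean(group_b[0])],
                [mean(group_a[1]), mean(group_b[1])])  # positive across groups

print(within_a, within_b, between)
```

Regressing the two group averages on each other gives a positive slope, even though every individual-level relationship is negative – exactly the trap waiting in aggregated data.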

Having said how much I enjoyed reading Truth or Truthiness, I’m not sure who it’s aimed at who isn’t already really interested in statistics. For newcomers to Wainer, I’d recommend his wonderful earlier books, Picturing the Uncertain World and Graphic Discovery. They’re up there with Edward Tufte’s books on intelligent visualisation (rather than the decorative visualisation that’s become unfortunately common).