Ultimate price

The weeks are speeding past in a blur of Zooms. However, I’ve found time to read – alongside the rivalry, gore, sex and drama of Tom Holland’s Rubicon – Ultimate Price: The Value We Place on Life by Howard Steven Friedman.

This is an unpromising subject in some ways: policies and regulations often place an implicit, or sometimes explicit, value on human life. How is that value determined? Is it consistent in different domains? What’s the fair and ethical way to make such judgments, which are inevitable when it comes to deciding which costly safety measures to enforce through regulation, whether to purchase an expensive drug for medical treatment, or how much to compensate crime or accident victims?

The short answers are that the value of life – even the dry-sounding Value of a Statistical Life – is inconsistently established and rarely debated. Monetary values and lives never feel like they belong in the same conversation. Rather, the issue is often the subject of industry lobbying and political horsetrading. Some regulations – e.g. environmental ones imposing costs on industry – are massively scrutinised. Others – e.g. extra airline security – are not. Some lives are valued at millions, others – including foreign civilians killed by drones, say – are not valued at all. Cost-benefit analysis is sometimes uncomfortable; consistent cost-benefit analysis even more so.

I picked up the book with a slight sense of duty as I do one lecture on this area, and can report that it’s very clear and well-informed. It has loads of thought-provoking examples. There is a US focus (health insurance) but the applications are wide. It will make a great supplementary read, giving students loads of examples to get them thinking and plenty of additional references. There isn’t as much published as you might think giving an overview of these issues, so I welcome this one and will add it to the reading list.


What counts?

After hating the book of the moment, Shoshana Zuboff’s much-praised Surveillance Capitalism, perhaps it underlines my contrariness if I tell you how much I loved my latest read, a book about classification. It was Sorting Things Out by Geoffrey Bowker and Susan Star, quite old now (1999). I can’t remember how I stumbled across it, but it absolutely speaks to my preoccupation with the fact that we see what we count, and not the other way around.

The book investigates the confluence of social organisation, ethics and technologies of record-keeping as manifest in the establishment of systems of classification and standards. The examples it uses are medical systems such as diagnostic manuals, but the arguments apply more broadly. The point it makes about the role of record-keeping technologies reminded me of a terrific book I read last year, Accounting for Slavery by Caitlin Rosenthal, which explored the role of commercially produced record books in the managerialism of large slave plantations in the US. The argument that a classification system lends the authority of something seemingly technocratic to highly political or ethical choices echoes Tom Stapleford’s wonderful book The Cost of Living in America.

As Bowker and Star point out, classification systems shape people’s behaviour. They come to seem like natural rather than constructed objects. They also fix perceptions of social relations, as a classification framework or set of standards “[m]akes a certain set of discoveries, which validate its own framework, much more likely than an alternative set outside the framework.” To switch frameworks requires overcoming a bootstrapping problem – you can’t demonstrate that a new one is superior because you don’t yet have the units of data on which it relies. People can’t see what they take for granted until there is an alternative version not taking the same things for granted.

And, although this book was written early in the internet era, the authors note that “Software is frozen organisational and policy discourse” – as we are learning with the burgeoning debate about algorithmic accountability. The essential ambiguity of politics is impossible to embed in code. The big data and AI era will force some of the fudged issues into the open.



On being factful

I’ve torn through the late Hans Rosling’s Factfulness, completed by his son and daughter-in-law, Ola Rosling and Anna Rosling Rönnlund. I wish everybody could read it. The message in a nutshell is: learn to reflect about the news you read/hear, and appreciate that there’s a difference between good and better.

The book is written around a multiple choice quiz Rosling administered online and to many audiences he spoke to, which revealed that large majorities of all kinds of audiences (including Nobel prize winners and the Davos elite) get certain basic facts very badly wrong. In a lot of ways, things are not at all great, around the world. But they are a lot better than they used to be, and a lot better than most people think they are. This applies to health and longevity, girls’ education, violent crime, and so on.

In terms of applying a bit of thought to the drama of the news, Rosling gives very practical advice: always think about the size of a number you hear – what is a good comparator, should it be a ratio; don’t always extrapolate in a straight line; don’t mistake the extremes for the typical experience; understand the power of exponential change. He means ‘factfulness’ as an analogue to ‘mindfulness’ (but much more worthwhile).

I like Rosling’s philosophy: “The goal of higher income is not just bigger piles of money. The goal of longer lives is not just extra time. The ultimate goal is to have the freedom to do what we want.” I agree with his diagnosis that fear plays a big part in our responses to news: “Critical thinking is always difficult but it’s almost impossible when we’re scared.”

Although (to boast a bit) I got 12/13 on the quiz, I learned a lot from the book. For instance, I was struck by his argument that given we know 88% of the world’s children now get vaccinated, this tells us most countries can now sustain the logistics and infrastructure of the necessary cold chains; “This is exactly the same infrastructure needed to establish new factories,” Rosling writes. Yet most of the investors he quizzed thought only 20% of children get vaccinated, so they don’t know about that opportunity. He also argues that big pharma is missing a business opportunity in its focus on expensive new drugs for the richest markets, when it could offer older drugs at lower prices in the extensive middle-income markets.

The book is also a delightful read, full of personal anecdote. One can hear his voice, which is a testament to the loving work his son and daughter-in-law put into preparing it for publication. Also worth a look is Anna Rosling Rönnlund’s fascinating Dollar Street photography project.


Politics and numbers

I’m thoroughly enjoying William Deringer’s Calculated Values: Finance, Politics and the Quantitative Age – almost finished. The book asks: why, from the early 18th century, did argument about numbers – statistics – come to have a special weight in political debate? Often, the growing role of statistics (as in numbers relevant to the state) is seen as part of the general Enlightenment spread of scientific discourse and rationality. Deringer argues that in fact the deployment of numerical calculation about the state of the nation was one of the weapons of choice in the bitter political division between Whigs and Tories. The sphere of public debate was, as he puts it, “uncivil, impolite, sometimes irrational”.

The book presents plentiful evidence of this use of numbers. For instance, pamphlets proliferated about hot issues such as the payment to be made to Scotland for its Union with England, or the balance of trade – and were packed with errors. Printers, and presumably readers, “did not seem to care about getting the numbers right.” They were only there to slug it out with somebody else’s numbers. Deringer writes (in the context of the fierce debate about the ‘Equivalent’ England should pay Scotland on their Union), “Critics of quantification in the modern era argue that efforts to address political questions through quantitative means often have the effect of foreclosing political controversy by translating substantive political questions into ‘technical’ ones. This was not the case in 1706; if anything, the opposite was true. The Equivalent calculation brought new (technical) questions into public view and revealed their political stakes.”

The book therefore argues that the emergence of partisan politics after the 1688 Revolution was a major reason for the emergence of the political life of numbers. Starting with a more or less blank slate, a good part of the contest was epistemological: what were the right categories and classifications to even begin to measure? What was the right methodology? For example, one innovation was the introduction of discounting future values, something many contemporaries found both hard to understand and unsettling: “It placed almost no value on anything that happened beyond one human lifetime. This peculiar claim clashed violently with many Britons’ intuition about what the future was worth to them.” It probably still clashes with intuition, to the extent the typical Briton thinks about it.
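The discounting point is easy to see with a little arithmetic. A minimal sketch (the 5% rate is an illustrative assumption, not a figure from Deringer's book) shows why contemporaries found the method so unsettling: compound discounting leaves almost no present value for anything beyond a human lifetime.

```python
# Illustrative sketch of compound discounting: at any plausible interest
# rate, sums beyond one human lifetime are worth almost nothing today.
# The 5% rate is an assumption for illustration only.

def present_value(amount, rate, years):
    """Value today of `amount` received `years` from now, discounted at `rate`."""
    return amount / (1 + rate) ** years

for years in (10, 50, 100, 200):
    pv = present_value(100.0, 0.05, years)
    print(f"£100 in {years:>3} years is worth £{pv:.2f} today")
```

At 5%, £100 a century hence is worth well under £1 today, and two centuries hence less than a penny – exactly the "almost no value... beyond one human lifetime" that clashed with intuition.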

Of course, one can’t read this history and not think about the notoriously misleading £350m claim on the Brexit bus, or the claim and counter-claim about the likely Brexit effect on UK trade. It isn’t that both sides are equally true or false – far from it – but the political clash is putting statistics centre stage. David Hume was sceptical about the usefulness of the quantitative debate for exactly this reason: “Every man who has ever reasoned on this subject, has always proved his theory, whatever it was, by facts and calculations.” I’m with Deringer in concluding that nevertheless the conversation about numbers is essential.

As Calculated Values concludes, “modern quantitative culture is fundamentally two-sided.” People think numerical evidence has a special, trustworthy status; and at the same time that it’s especially easy to lie with statistics. These are two sides of the same coin. The adversarial debate reveals how useful the numbers and calculations can be to interrogate and test the opposing claims.

Attitudes have shifted over time. In the 19th century, the partisan contest abated. Dickens for one saw numbers as a dry, soulless window on society when he portrayed Mr Gradgrind in Hard Times. The bureaucratisation of statistics, and development of official agencies, made statistics seem just a tiny bit dull. Needless to say, they are centre stage again now for reasons of both political conflict and epistemological uncertainty. Once again, some politicians wield numbers without any great concern about their accuracy or meaningfulness; the victory in debate is all that matters. Once again, given the profound changes in the structure of the economy, we can’t be sure what categories and methods will give us the understanding we would like. This is a terrific book for reflecting on contested and uncertain statistical terrain.


Economic observation

On Friday all the researchers in the new Economic Statistics Centre of Excellence (ESCoE) met at its home in the National Institute to catch up on the range of projects, and it was terrific to hear about the progress and challenges across the entire span of the research programme.

One of the projects is concerned with measuring uncertainty in economic statistics and communicating that uncertainty. The discussion sent me back to Oskar Morgenstern’s 1950 On the Accuracy of Economic Observations (I have the 2nd, 1963, edition). It’s a brilliant book, too little remembered. Morgenstern is somewhat pessimistic about both how meaningful economic statistics can be and whether people will ever get their heads around the inherent uncertainty.

“The indisputable fact that our final gross national product or national income data cannot possibly be free of error raises the question whether the computation of growth rates has any value whatsoever,” he writes, after showing that even small errors in levels data imply big margins of error in growth rates.
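Morgenstern's point is easy to illustrate numerically. In this sketch (the 2.5% true growth rate and 1% measurement-error band are illustrative assumptions, not Morgenstern's own figures), a mere 1% error in each of two level observations can swing the measured growth rate from well under 1% to over 4%:

```python
# Illustration of Morgenstern's argument: small errors in *levels* imply
# large errors in *growth rates*. The 1% error band and 2.5% true growth
# are assumptions chosen for illustration.

def growth_rate(y0, y1):
    """Period-on-period growth rate between two level observations."""
    return y1 / y0 - 1

def growth_bounds(y0, y1, error):
    """Range of measured growth when each level may be off by +/- `error`."""
    low = growth_rate(y0 * (1 + error), y1 * (1 - error))
    high = growth_rate(y0 * (1 - error), y1 * (1 + error))
    return low, high

true_g = growth_rate(100.0, 102.5)               # true growth: 2.5%
low, high = growth_bounds(100.0, 102.5, 0.01)    # 1% error in each level
print(f"true growth {true_g:.2%}, measured anywhere from {low:.2%} to {high:.2%}")
```

A 1% error in levels – far smaller than anything Morgenstern thought realistic for national accounts – already makes the measured growth rate uninformative about whether growth was sluggish or booming.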

On the communications front, he noted that members of the public were often suspicious of economic statistics – and rightly so: “The professional users of economic and social statistics strangely enough often seem to be less skeptical than the public.” Yet, he added, public trust was essential both to deliver the appropriations of funding for statistical agencies and so that people had the confidence to provide information to statisticians.

I do find it odd that many economists download the productivity data from standard online sources uncritically and pronounce on the ‘puzzle’ of its zero growth when so many providers of the raw data (businesses in this case) point out that from their perspective there are significant productivity gains. But that’s what the ESCoE is about – trying to resolve a different puzzle, that of two contradictory sets of evidence – and it’s keeping me gainfully occupied.
