Numbers, objectivity and meaning

I’ve carved out as many empty days as possible this summer to make significant headway with my next book, and as well as writing I’ve been re-reading some golden oldies. One is Theodore Porter’s classic Trust In Numbers: The Pursuit of Objectivity in Science and Public Life. For those who haven’t read it, it’s a historical exploration of the pursuit of quantification in economic domains (accounting, cost benefit analysis) as an expression of objectivity. A core argument is that the assertion of quantified objectivity is a signal of a group’s lack of power rather than the opposite; powerful groups or people expect to have their judgment trusted.

This is counter-intuitive if one has read so often that the deployment of numbers is the way social and political power is exerted by economists and others. But the case Porter makes is persuasive, certainly as far as the historical origins of quantification go. He also acknowledges that objectivity has become a desired characteristic of societies governed by the rule of law: “A decision made by the numbers … has at least the appearance of being fair and impersonal. Scientific objectivity thus provides an answer to a moral demand. … Quantification is a way of making decisions without seeming to decide.” But he adds: “Objectivity lends authority to officials who have very little of their own.” So numbers have the dual purpose of signalling impartiality and thereby giving authority to the number-producers: “The reputation of accounts and statistics for grayness helps to maintain their authority.”

My book will be anything but gray. It is looking at how economic statistics are constructed and how inadequate they have become as metrics of social progress (or its absence) given the technology-driven changes in the structure of the economy as well as the imperatives of making the environment count. These changes have been under way at least since my first book The Weightless World was published 26 years ago, but the social process of constructing the statistics is a slow one, carried out within the expert community of national statisticians. Thinking about how to replace what we have now – given the issues I highlighted in GDP – involves some deeply conceptual and philosophical questions.


Politics by numbers

Limits of the Numerical: the abuses and uses of quantification (editors Christopher Newfield, my dear colleague Anna Alexandrova, and Stephen John) is right up my street. It’s a collection of thoughtful essays exploring exactly what the subtitle says, some general and some relating to specific issues of quantification in education and healthcare, and (by Anna and Ramandeep Singh) the measurement of wellbeing.

The aim of the volume is to move beyond what it characterises as the Original Critique (set out in some classics like Desrosières and Porter). This critique points to the historical context of measurement and quantification, and the use of the presumed authority and objectivity of numbers as an instrument of politics or power. As the introduction points out, the Original Critique assumes this deployment of numerical thinking is successful. Yet in recent years expertise – generally involving evidence and numbers – has been steadily demonized, and independent evidence-based agencies have either been targets for populism or lost legitimacy more broadly.

One sign of the demise of expertise was the New Public Management era of Blair and Clinton, disciplining experts by use of quantitative targets. Subsequently populists have deployed numbers against experts too, but in their case appealing to public opinion or crowd size. Thus, “The numerical idiom became just another part of the degraded rhetoric of politics.” But perhaps the real nail in the coffin of quantified expertise was the financial crisis and its aftermath: whatever the experts were doing, it didn’t deliver: “Against the promise of governance by the numbers, the 2008 meltdown and everyday experience both revealed a regime of incompetence, political interference and elite bias.”

There are interesting dives into the spread and effects of higher education rankings – as the book notes, audit culture here has become performative, bringing into being the measurable phenomena; orphan drugs – featuring the “unholy alliance” between patients and pharma companies seeking to profit from financial incentives to develop drugs for rare diseases; and climate science.

I was particularly interested in the chapter on the measurement of wellbeing as Anna and I are co-authors on this subject, and indeed I attended an excellent conference on wellbeing research last week. The chapter sets out the different definitions and hence methods of quantification of wellbeing and sets out what it calls “Letwin’s dilemma” after the former minister who helped introduce wellbeing measurement to UK official statistics. The dilemma is that most philosophical and psychological approaches recognise separate dimensions of wellbeing, but a political sponsor needs a simple, measurable and comparable concept to compete with other ‘hard’ metrics in the choices facing governments. This is a genuine dilemma; but the chapter argues against what has become the single metric in policy debates, a scale-based measure of Life Satisfaction, as too reductive and detached from underlying psychological phenomena. This is exactly the debate I had at the conference with Richard Layard, the UK’s most influential researcher on wellbeing.

I polished off the book in just over a day, and it would interest anybody working on the sociology and politics of measurement and related areas, taking forward the now well-known critique of quantification and linking it persuasively to the political trends of the past decade.


The joy of measurement

Beyond Measure: The Hidden History of Measurement by James Vincent had the good fortune to be published more or less when the Johnson government here decided a good wheeze to distract voters’ attention from – well, everything – would be to launch a consultation on the reintroduction of imperial measures alongside metric. The recent by-election results suggest the wheeze failed; maybe the population just isn’t overwhelmed with excitement by the subject of measurement standards.

I am of course one of the minority who does find measurement an unbelievably exciting subject, and I enjoyed reading the book. It uses a chronological structure to explore different aspects of measurement, starting with the ancient world and the emergence of standard measures for trade, moving on to early modern states, the Scientific Revolution and French Revolution, with a separate chapter on early statistics, and ending up indeed with a chapter subtitled “Metric vs imperial and metrology’s culture war” and finally measurement today (including the concept of the ‘quantified self’).

I quite like this summary of the purpose of measurement: “From the ancient world onwards, measurement has been embraced not only for its practical benefits – for its utility in tasks like construction and trade – but also for its ability to create a zone of shared expectations and rules; to mediate our experience with the world and one another, ensuring that the interactions between two strangers who live under the same set of measures can be validated and trusted.” This purpose of mutual benefit sits alongside the use of measurement by rulers and states – and the book indeed cites classic authors like James Scott and Theodore Porter.

Although books such as the wonderful Seeing Like A State, or more recent ones on specific areas of measurement such as Andrew Whitby’s recent The Sum of The People about the history of the census, obviously have a lot more detail than the chapters here, Beyond Measure delivers on its aim of providing an accessible overview of the history of measurement and many of the issues of meaning. It’s a great starting point for anyone not as immersed (as nerds like me) in the measurement literature.

It’s also an enticing read, with anecdotes aplenty (visiting a nilometer, chatting to Brexiteers about pints and miles) and full of my favourite kind of useful facts. Who knew, for example, that, “In England, measurement disputes in markets were settled by a special tribunal known as the ‘court of piepowder’.” It dates to the 11th century, predating the normal court system, and doled out on-the-spot justice on market day. Piepowder is apparently a corruption of ‘pieds poudrés’, or dusty feet, characterising travelling merchants.

Perhaps the UK Government will, in its ever-more desperate attempts to show something, anything, for Brexit, re-establish piepowder courts in markets up and down the land, should remoaniac stallholders not want to replace their kilos and metres with pounds and feet. For, as the book illustrates, questions of measurement are not (just) technical, but highly political.


Ultimate price

The weeks are speeding past in a blur of Zooms. However, I’ve found time to read – alongside the rivalry, gore, sex and drama of Tom Holland’s Rubicon – Ultimate Price: The Value We Place on Life by Howard Steven Friedman.

This is an unpromising subject in some ways: policies and regulations often place an implicit, or sometimes explicit, value on human life. How is that value determined? Is it consistent in different domains? What’s the fair and ethical way to make such judgments, which are inevitable when it comes to deciding which costly safety measures to enforce through regulation, whether to purchase an expensive drug for medical treatment, or how much to compensate crime or accident victims?

The short answers are that the value of life – even the dry-sounding Value of a Statistical Life – is inconsistently established and rarely debated. Monetary values and lives never feel like they belong in the same conversation. Rather, the issue is often the subject of industry lobbying and political horsetrading. Some regulations – eg environmental ones imposing costs on industry – are massively scrutinised. Others – eg extra airline security – are not. Some lives are valued at millions, others – including foreign civilians killed by drones, say – are not valued at all. Cost benefit analysis is sometimes uncomfortable, consistent cost benefit analysis even more so.

I picked up the book with a slight sense of duty as I do one lecture on this area, and can report that it’s very clear and well-informed. It has loads of thought-provoking examples. There is a US focus (health insurance) but the applications are wide. It will make a great supplementary read, giving students loads of examples to get them thinking and plenty of additional references. There aren’t as many books as you might think giving an overview of these issues, so I welcome this one and will add it to the reading list.


What counts?

After hating the book of the moment, Shoshana Zuboff’s much-praised Surveillance Capitalism, perhaps it underlines my contrariness if I tell you how much I loved my latest read, a book about classification. It was Sorting Things Out by Geoffrey Bowker and Susan Star, quite old now (1999). I can’t remember how I stumbled across it, but it absolutely speaks to my preoccupation with the fact that we see what we count, and not the other way around.

The book investigates the confluence of social organisation, ethics and technologies of record-keeping as manifest in the establishment of systems of classification and standards. The examples it uses are medical systems such as diagnostic manuals, but the arguments apply more broadly. The point it makes about the role of record-keeping technologies reminded me of a terrific book I read last year, Accounting for Slavery by Caitlin Rosenthal, which explored the role of commercially produced record books in the managerialism of large slave plantations in the US. The argument that a classification system lends the authority of something seemingly technocratic to highly political or ethical choices echoes Tom Stapleford’s wonderful book The Cost of Living in America.

As Bowker and Star point out, classification systems shape people’s behaviour. They come to seem like natural rather than constructed objects. They also fix perceptions of social relations, as a classification framework or set of standards “[M]akes a certain set of discoveries, which validate its own framework, much more likely than an alternative set outside the framework.” To switch frameworks requires overcoming a bootstrapping problem – you can’t demonstrate that a new one is superior because you don’t yet have the units of data on which it relies. People can’t see what they take for granted until there is an alternative version not taking the same things for granted.

And, although this book was written early in the internet era, the authors note that “Software is frozen organisational and policy discourse” – as we are learning with the burgeoning debate about algorithmic accountability. The essential ambiguity of politics is impossible to embed in code. The big data and AI era will force some of the fudged issues into the open.
