“Facts alone are wanted in life”

One of the great inventions of the Enlightenment and capitalism – perhaps one of the lesser-known ones – was statistics. The accumulation of facts, represented by numbers, was taken as a mark of progress, along with the presumption that an aggregate number such as a mean could be used to study something inherently variable, including the behaviour of individuals in society. This is the argument of Theodore Porter’s 1986 book [amazon_link id=”069102409X” target=”_blank” ]The Rise of Statistical Thinking, 1820-1900[/amazon_link]. Yes, I’ve been sucked into the history of statistics.

[amazon_image id=”069102409X” link=”true” target=”_blank” size=”medium” ]The Rise of Statistical Thinking, 1820-1900[/amazon_image]

Porter writes: “The pre-numerate age was not entirely deprived of statistical tables, but the great explosion of numbers that made the term ‘statistics’ indispensable occurred during the 1820s and 1830s. The demands it placed on people to classify things so that they could be counted and placed in an appropriate box in some official table, and more generally on the character of the information people need to possess before they feel they understand something, are of the greatest importance.” (p11)

The collection of social statistics was also a tool in the centralisation and bureaucratisation of government. Early ‘statists’ hoped to bypass traditional authorities such as church and monarch; but the effect of collecting orderly data on society was to consolidate state power, the book argues. (It is only now that we can think about the potential for citizen statistics.) Even so, the enthusiasm for statistics was driven by pragmatic reformers, who “believed that the confusion of politics could be replaced by an orderly reign of facts.” (p27) This is still the dream of technocrats, and it is still disappointed by every election campaign.

In 19th century Britain, statistical enthusiasm took shape in private societies, principally the Statistical Societies of London (forerunner of the Royal Statistical Society, with Malthus and Babbage among its founders) and Manchester (still thriving). The book draws an interesting connection between the emerging idea of social laws – statistical regularities unaffected by individual choices – and laissez-faire liberalism, which reached its apogee in the 1850s. Government was seen as a hindrance to ‘natural’ social progress, obstructing the course of history toward prosperity and freedom. Interestingly, the idea of statistical regularities in physics, such as James Clerk Maxwell’s work on gases, was borrowed from the observation of social regularities.

And the opponents of statistics (many of them French positivists such as Comte) rejected the key novelty of statistical thinking, the idea that individual unpredictability would cancel out: “Any social science that views the differences among individuals as random, they argued, is irremediably flawed. …One must analyze carefully in order to establish causes and recognize their heterogeneous effects on different parts of the population.” (p152) There were medical opponents too, who said statistical generalizations were useless because they said nothing about the individual patient – something anybody presented with a diagnosis and a population frequency will identify with.

Fascinating. [amazon_link id=”0199536279″ target=”_blank” ]Mr Gradgrind’s[/amazon_link] insistence on Facts (“Now, what I want is, Facts. Teach these boys and girls nothing but Facts. Facts alone are wanted in life. Plant nothing else, and root out everything else. You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.”) is both political and performative, and not boring at all.

[amazon_image id=”150567817X” link=”true” target=”_blank” size=”medium” ]Hard Times[/amazon_image]

Recent robot round-up

I’m looking forward to reading Martin Ford’s [amazon_link id=”0465059996″ target=”_blank” ]Rise of the Robots[/amazon_link] – it gets a good review in the FT today. Edward Luce calls it “well researched and disturbingly persuasive.”

[amazon_image id=”0465059996″ link=”true” target=”_blank” size=”medium” ]Rise of the Robots: Technology and the Threat of a Jobless Future[/amazon_image]

I’m still a robo-sceptic in the sense of thinking there is nothing inevitable about the employment and income distribution outcomes of skill-biased automation. It’s technological determinism to think otherwise, as the underlying technological waves are channelled through economic and political institutions. That’s not to say we shouldn’t be concerned. After all, there was a wave of automation in manufacturing in the late 1970s/early 1980s and the social consequences of that were devastating – the institutions handled the transition very badly.

There is an interesting recent (free) e-book collection of essays (including one of mine) from the IPPR, Technology, Globalization and the Future of Work. See also this recent paper, Robots at Work, by Georg Graetz and Guy Michaels. Using a panel of industry data across 17 countries, they find that robotisation increased total factor productivity and wages, although with some adverse effects on the hours worked by low-skilled workers.

The largeness of small errors

I enjoyed Oskar Morgenstern’s trenchant observations about the (in)accuracy of economic statistics in [amazon_link id=”0691003513″ target=”_blank” ]On The Accuracy of Economic Observations[/amazon_link]. Here are a few more examples:

[amazon_image id=”0691003513″ link=”true” target=”_blank” size=”medium” ]On the Accuracy of Economic Observations[/amazon_image]

“The idea that as complex a phenomenon as the change in a ‘price level’, itself a heroic theoretical abstraction, could at present be measured to such a degree of accuracy [a tenth of one percent] is simply absurd.”

“It behooves us to pause in order to see what even a 5 percent difference in national income means. Taking the US and assuming a Gross National Product of about 550 billion dollars, this error equals + or – 30 billion dollars. This is more than twice the best annual sales of General Motors…. It is far more than the total annual production of the entire electronics industry in the United States.”

(Updating and relocating this exercise, a 5% error in the £1.7 trillion GDP of the UK – some £85 billion – would be almost the same size as the entire UK motor industry including the supply chain, more than the total profits of the financial services sector, or about the same as households spend in total on food and drink.)

The errors don’t get the attention they deserve, Morgenstern writes: “Instead, in Great Britain as in the United States and elsewhere, national income statistics are still being taken at their face value and interpreted as if their accuracy compared favourably with that of the measurement of the speed of light.” And he points out that, arithmetically, when you are looking at growth rates of figures each measured with some error, even proportionately small errors in the levels turn into large errors in the rate of change. He gives an arithmetical example: a ‘true’ growth rate of 1.8% could be measured as anywhere between -7.9% and +12.5% if each of the two levels is mismeasured by up to 5%.
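Morgenstern’s arithmetic is easy to check: the worst cases come from pairing an overstated level with an understated one. Here is a minimal sketch in Python (the 1.8% ‘true’ growth rate and the 5% error bound are his; the code itself is my own illustration):

```python
# Propagation of level errors into growth rates, after Morgenstern's example.

def growth_bounds(level_then, level_now, max_error):
    """Range of measured growth when each level may be off by +/- max_error."""
    low = level_now * (1 - max_error) / (level_then * (1 + max_error)) - 1
    high = level_now * (1 + max_error) / (level_then * (1 - max_error)) - 1
    return low, high

low, high = growth_bounds(100.0, 101.8, 0.05)  # 'true' growth rate of 1.8%
print(f"measured growth could lie anywhere between {low:.1%} and {high:.1%}")
# -> measured growth could lie anywhere between -7.9% and +12.5%
```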

It’s interesting that every economist and statistician would acknowledge the errors problem, and yet virtually all of them ignore it. We’ve invested so much in the figures that to admit great uncertainty would undermine their totemic value and the ritual pronouncements about them. At a talk I did at IFN in Stockholm yesterday about [amazon_link id=”0691169853″ target=”_blank” ]GDP[/amazon_link], one of the respondents, Karolina Ekholm, State Secretary at the Ministry of Finance, said it made her uneasy that key policy decisions such as cutting government spending depended so much on the output gap – the difference between two imaginary and uncertain numbers. Of course we have to try to measure – and how marvellous it would be if the official statisticians got some extra resources to improve the accuracy of the raw data collection – and yet I think she’s right to be uneasy.

Next on my reading pile: [amazon_link id=”B00SLUQ5HS” target=”_blank” ]The Politics of Large Numbers: A History of Statistical Reasoning[/amazon_link] by Alain Desrosières and [amazon_link id=”069102409X” target=”_blank” ]The Rise of Statistical Thinking, 1820-1900[/amazon_link] by Theodore Porter.

[amazon_image id=”B00SLUQ5HS” link=”true” target=”_blank” size=”medium” ]The Politics of Large Numbers: A History of Statistical Reasoning[/amazon_image]  [amazon_image id=”069102409X” link=”true” target=”_blank” size=”medium” ]The Rise of Statistical Thinking, 1820-1900[/amazon_image]

Why haven’t economic statistics improved?

Party animal that I am, I’ve been spending my spare time in Stockholm this week (where I’ve been doing a couple of events for the Institutet för Näringslivsforskning, IFN) reading Oskar Morgenstern’s [amazon_link id=”B00177CAI0″ target=”_blank” ]On The Accuracy of Economic Observations[/amazon_link]. A friend just sent me the 1963 edition of this 1950 book, having read a draft paper of mine – out quite soon – which refers to it via a quotation from a recent paper by Charles Manski.

[amazon_image id=”0691003513″ link=”true” target=”_blank” size=”medium” ]On the Accuracy of Economic Observations[/amazon_image]

As so often when you start researching something, it turns out there’s nothing completely new. Morgenstern wrote trenchantly about some of the things that have been bothering me about the way economists use economic statistics. For example: “A significant difference between the use of data in the natural and social sciences is that in the former the producer of observations is usually also their user.” Economic data, on the other hand, are collected by many hands, often consist of time series, and are processed by statisticians before the economist ever sees them. Economists rarely think hard enough about their numbers – a problem made all the worse by the easy availability of statistics to download from handy websites and pour into a software package that will churn through regressions and generate test statistics.

Morgenstern also notes the strong incentives many ‘creators’ of economic data have to give misleading responses to survey questions. What’s your income? What price do you charge for this service, oh oligopoly provider? What is the level of your GDP, oh Greek government? “‘Strategic’ considerations play havoc with reliability.” Even if all respondents are well-intentioned, sampling error, general mistakes in data collection and processing, and so on mean we place a ludicrous amount of confidence in any and all economic statistics. “Three or four digits is probably the maximum accuracy of primary data that ever needs to be considered in the vast majority of economic arguments,” he wrote.

Gratifyingly, he likes the technique of spectral analysis of time series, which I did a lot of in my PhD days, because it arranges the data in frequency bands. “There can hardly be any doubt that the powerful new techniques of spectral analysis will put the study of economic fluctuations on a new basis,” the book says. Alas, it didn’t catch on except among a small number of time series anoraks. The chapter I’ve just finished concludes that economics still needs to learn to live with the fundamental importance of errors in the data, and to include them in its theories – just as modern physics has. Alas, this too has not caught on among economists. All of which raises the question: why not? Why do economic statistics seem not to have improved since 1960?
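For readers who haven’t met the technique: spectral analysis estimates how much of a series’ variance sits in each frequency band, so that fluctuations of different periodicities can be picked apart. A minimal sketch on a simulated series (the 5-year cycle and the noise level are my own illustrative choices, not Morgenstern’s):

```python
import numpy as np
from scipy.signal import periodogram

# Simulated quarterly 'output' series: a 5-year (20-quarter) cycle plus noise.
rng = np.random.default_rng(0)
quarters = np.arange(200)
series = np.sin(2 * np.pi * quarters / 20) + rng.normal(scale=0.5, size=200)

# The periodogram estimates the variance contributed by each frequency band.
freqs, power = periodogram(series, fs=4.0)  # fs = 4 observations per year

# The dominant peak should sit near 0.2 cycles per year, i.e. a 5-year cycle.
peak = freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency term
print(f"dominant frequency: {peak:.2f} cycles per year")
```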

My favourite chart at the moment is the Bank of England’s GDP fan chart in the Inflation Report. February’s showed a 90% chance that UK annual GDP growth was between 1% and 5% at the start of 2015, taking into account simply past experience of revisions to the data, and not any of the additional potential sources of error. That is the difference between the standard of living doubling in 70 years and doubling in 14 years – in other words, vast. Something to bear in mind for anyone making any last-minute decisions about how to vote today on the basis of the parties’ economic claims.
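For the record, the doubling times follow from compound growth: a quantity growing at rate \(g\) a year doubles after \(\ln 2 / \ln(1+g)\) years, so

\[
\frac{\ln 2}{\ln 1.01} \approx 70 \text{ years}, \qquad \frac{\ln 2}{\ln 1.05} \approx 14 \text{ years}.
\]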

It’s the power, stupid

What’s not to like about a book with the title [amazon_link id=”1612193749″ target=”_blank” ]The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy[/amazon_link]? Well, various things, as it turns out. This is a book of essays by Occupy Wall Street activist and LSE anthropologist David Graeber, author of the tome [amazon_link id=”1612194192″ target=”_blank” ]Debt: The First 5,000 Years[/amazon_link]. Just like his previous book, I enjoyed the read, strongly disagreed with some points and strongly agreed with others. That makes for quite good entertainment.

[amazon_image id=”B00MKZ0QZ2″ link=”true” target=”_blank” size=”medium” ]The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy[/amazon_image]

The book is a series of essays, ostensibly about bureaucracy, but really about the iniquities of neoliberal capitalism, which is not at all an arena of individual liberty but on the contrary a highly circumscribed system of exploitation. ‘Bureaucracy’ does service as a catch-all term for the set of rules through which the oligarchy in cahoots with the government restrict the options of the great mass of people.

There is certainly something in this portrayal of a society in which rich people and large organisations can act as they please while the great majority experience a highly regulated life. Think about wanting to open a small cafe or a two-person plumbing business, and the extent of certification and inspection that is needed. Or opening a bank account. Graeber writes that we spend hours entangled in paperwork to apply for licences or pay taxes or open accounts, and: “The paperwork we do exists in this sort of in-between zone – ostensibly private, but in fact entirely shaped by a government that provides the legal framework, underpins the rules with its courts and all the elaborate mechanisms of enforcement that come with them but – crucially – works closely with the private concerns to ensure the results will guarantee a certain rate of private profit.” All the more so in these days of outsourcing of state functions such as determining eligibility for benefits. There has been, he says, a gradual fusion of public and private power into “a single entity, rife with rules and regulations.”

So I think there is something in this. Matt Taibbi was onto the same point in [amazon_link id=”0385529961″ target=”_blank” ]Griftopia[/amazon_link] when he argued that the people whose support for Tea Party policies looks crazy really do have an arbitrary and unfriendly government controlling their lives.

Where I strongly disagree with Graeber is in his analysis of how to respond. The good part of his argument is about experimenting with other kinds of association and collective organisation – not government, but not-for-profit – perhaps idealistic but worth trying. However, he also writes: “Do not underestimate the importance of sheer physical violence,” drawing on the experience of protests such as the 1999 ‘Battle of Seattle’ riots against the WTO.

He also has a long excursion into technology, arguing that the pace of change peaked in the 1950s or 60s, and that current technological change is not delivering things people might actually want, such as jet packs. The high hopes of the 1960s as manifested in Star Trek, say, have not materialised: no beaming up, no holodecks. Instead, technology has been diverted to military ends. Apart from the fact that the military have always influenced technological research, this is a fact-free essay. I’d want to see some evidence, because if you look either at price declines (or at how many hours of work are needed to purchase a unit of computing power, or of light), or at the character of post-1960s discoveries (medical advances, compelling technologies such as mobile telephony), then there seems to me to be decent evidence of no slowdown. Why can’t computers think yet? Well, give them time. Economic historians like Paul David and David Landes have pointed out how long, variable and unpredictable are the lags between discoveries and their economic and social impact. The dreams and fears about electricity probably peaked more than 60 years before its use became widespread.

Still, disagreements don’t really matter with a book like this. It’s enjoyable to read something thought-provoking with lots of interesting material. I enjoyed the Star Trek riff (“The Federation is Leninism brought to its future absolute cosmic success… a happy conjuncture of material abundance and ideological conformity”), the potted history of the 19th-century German postal system and its influence on visions of ideal future societies, and his review of the third Batman movie and its political interpretation. This book is also less than half the length of [amazon_link id=”1612194192″ target=”_blank” ]Debt[/amazon_link] – bureaucracy having a shorter history, I suppose. Although, in truth, the one thing this book isn’t really about is bureaucracy; it is another Graeber take on power.