Slightly random reading

I just re-read John McMillan’s Reinventing the Bazaar: A Natural History of Markets, as I keep recommending it to people and thought I’d better check how well it had aged. The answer is pretty well, although if he were to write it now there would surely be more about matching and digital markets. I won’t review it here though, as Laurent Franckx has done so at Goodreads. He concludes:

“I would highly recommend this book to any non-economist with a serious interest in understanding how economists really think about markets, or to economists who are interested in finding more real-world illustrations of the issues they discuss in their academic papers. The book is not very recent (first published in 2003). In the meanwhile, there has been an increased effort from great economists with real scientific credentials (rather than celebrity economists) to write for a general audience. But this book remains rather unique in its scope and balance.”

Another book that landed here recently and that I have skimmed is Neil Monnery’s A Tale of Two Economies: Hong Kong, Cuba and the Two Men Who Shaped Them. Monnery wrote the excellent (2017) Architect of Prosperity about Sir John Cowperthwaite’s shaping of colonial Hong Kong. Who he, you might ask? Indeed, that was part of the interest – an unknown who had laid the foundations for modern Hong Kong’s role as a financial and trading centre. This new book is, intriguingly, a compare-and-contrast of Cowperthwaite and Che Guevara, or rather an assessment of the natural experiment of two polar-opposite economic systems, both intended to deliver the same aim of economic development and lasting prosperity.

I’ve also read Gertrude Himmelfarb’s On Looking into the Abyss, picked up after the recent news of her death. Some of the obituaries made me think I should fill this gap in my knowledge. The book is a collection of lectures assessing post-modernism in history (an interesting read alongside the lead letter in the latest issue of The Point about English literature). It didn’t wow me, but I was interested to see how a collection of lectures turned out, as I’m preparing a series of public lectures I’ve given over the past eight years for a new book next year (to be titled Cogs and Monsters).

Meanwhile, there’s always my recent book, Markets, States and People, to read – very kindly reviewed in the FT by Giles Wilkes. 🙂



Growth, stagnation, and degrowth

There’s a new wave of interest in the degrowth idea, recently summed up in the New Yorker by John Cassidy. The degrowthers are mainly inspired by environmental concerns – how can consumption possibly continue to increase without limit without destroying the planet? – and the article also refers to Vaclav Smil’s recent book Growth, which adds to this seeming common sense the intellectual heft of energy physics and logistic curves.

I have no ideological commitment to the view that measured GDP growth will always revert to 1.5-2%, and found much food for thought in Smil. However, there is a misunderstanding in the degrowth movement about what growth implies for physical material and energy use, well explained by Noah Smith in his recent Bloomberg column. My colleague Dimitri Zenghelis also does an excellent job here of debunking degrowth, arguing it is not the best or only way to be green.

Smith refers to another recent book, Fully Grown: Why a Stagnant Economy is a Sign of Success by Dietrich Vollrath, to make the point that we can probably expect slower growth (Smil’s S-curve is flattening out) but this is very different from degrowth or zero growth.

The basic point is that the degrowth argument neither acknowledges intangible output growth nor explains what would somehow need to be taken away from the economy when there is a new innovation, to keep growth below zero. On the first point, think about oral rehydration therapy or mini-aspirin – new uses of existing materials which produce improved health outcomes that people are willing to pay for, whose value far exceeds the materials cost (sugar, salt and water; salicylic acid). On the second, if somebody invents a new item everybody wants to purchase – the way smartphones arrived in 2007, say – then what would we stop them buying to keep total growth at zero? And how?

Prof Vollrath’s book, which I read at the proof stage, is tremendous. He portrays the recent slowdown as an inevitability, the result of economic success. Past gains in health, and lower fertility rates due to reduced infant mortality and higher incomes, explain population ageing in the rich economies. Demography is reducing potential growth. We are on the whole also taking more leisure, with a trend decline in hours worked. Purchases of services are taking over from material goods as a share of expenditure, and productivity growth is slower in the service sector (for familiar, Baumol reasons). These two trends go a long way to explaining reduced growth.

The second half of the book explores other potential reasons for the growth slowdown, such as increased market power (see Thomas Philippon), inequality (Piketty) or too much government tax and regulation – and sets out the data explaining why none has a big enough effect to account for much of the trend slowdown. “I see no obvious reason why the growth rate would accelerate in the near future,” Vollrath concludes.

I really enjoyed Fully Grown, which gave me much food for thought. It is also simply excellent on the data sources, growth accounting, and trends. But I don’t think it tells the whole story about innovation either. Vollrath accepts (as Robert Gordon does not) that there are significant technological advances under way; but he sees these as making production more efficient and thus accelerating the shift to services: an ever-smaller part of the economy is becoming super-efficient.

The catch, I think, is in using real GDP per capita as the sole indicator of growth. It is a conceptually flawed measure for an intangible/services economy. Consider a haircut, a service for which there is at least a volume measure (which many services do not have). If the price of haircuts goes up, real GDP as constructed goes down; but if the price is rising because people are substituting from cheap cuts at Big Jim’s Trims round the corner to expensive cuts in Covent Garden, it actually means that they are purchasing a haircut plus a bundle of quality attributes – lovely salon, free cup of tea, head massage, an hour’s talking therapy from a charming hairdresser… In some four-fifths of the economy, the Price x Quantity = Revenue equation used to construct the growth statistics does not work. Either we should be quality-adjusting many more purchases (and this has its own problems), or there isn’t even a volume measure (what is a unit of management consultancy?).
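To make the haircut point concrete, here is a minimal sketch (all the numbers are invented for illustration) of how the standard deflation arithmetic treats a quality-driven price rise as pure inflation, so that none of the extra value people are paying for shows up as growth:

```python
# Illustrative sketch: quality substitution in haircuts under the
# Price x Quantity = Revenue construction of real output.
# All figures are hypothetical, purely to show the arithmetic.

# Year 1: everyone buys cheap cuts at Big Jim's Trims.
haircuts_y1, price_y1 = 100, 10.0
nominal_y1 = haircuts_y1 * price_y1   # nominal revenue = 1000.0

# Year 2: same number of haircuts, but people switch to pricier salon
# cuts bundling extra quality (lovely salon, tea, head massage...).
haircuts_y2, price_y2 = 100, 15.0
nominal_y2 = haircuts_y2 * price_y2   # nominal revenue = 1500.0

# With no quality adjustment, the whole price change is counted as
# inflation, so the deflator absorbs it entirely.
price_index = price_y2 / price_y1     # 1.5
real_y2 = nominal_y2 / price_index    # back to 1000.0

print(real_y2 == nominal_y1)  # True: measured real output is flat,
                              # despite the quality improvement
```

The same arithmetic run with a quality-adjusted (hedonic) price index would attribute some of the price rise to the quality bundle instead, and measured real growth would be positive.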

Anyway, read Vollrath and Smil, and devote energy to cherishing the environment. Read our Bennett Institute report, out in 10 days, on how to take a more rounded view of economic progress, including environmental impact, by considering wealth. But ignore the fashionable lure of degrowth.



How do we know if things have got better?

As someone interested in public policy, from the doing perspective as well as for research and teaching, thinking about social welfare is at the heart of matters for me. What does it mean to think about a government intervention or other collective action ‘making things better’ – for whom, and by how much? I think economists have been pretty poor at addressing these questions, despite past waves of interest in welfare economics – the last of them quite some time ago.

So I pounced on Matthew Adler’s Measuring Social Welfare: An Introduction when my colleague Anna Alexandrova pointed it out to me. The book does a thorough job of setting out, in not too technical a manner, how to apply a social welfare approach to public choices. Adler advocates a ‘welfare-consequentialist’ approach: decisions need to be evaluated according to their outcomes, which depend on the pattern of individual well-being outcomes. He also argues that interpersonal comparisons – and hence rules for ranking these – are essential. The inability to take account of distributional issues, and of different marginal utilities of income and consumption at different income levels, makes cost-benefit analysis – ranking policies by summing the monetary equivalents of individuals’ wellbeing – inadequate. I agree: CBA pretends policy decisions can be non-normative, which is clearly incorrect (and what’s more this pretence has had significant distributional and hence ethical consequences).

Although it spares the reader the algebra of social choice theory, the book is quite theoretical, but it does include some examples in later chapters, looking at regulations aiming to limit risks (such as health and safety rules or pollution rules). I’m still thinking through what Adler’s social welfare approach might mean in practice, though. The approach is a long way from being translatable into a set of policy rules like the Treasury’s Green Book. Nevertheless, this is an important read if economists are going to renew welfare economics.



Tech self-governance

The question of how to govern and regulate new technologies has long interested me, including in the context of a Bennett Institute and Open Data Institute report on the (social welfare) value of data, which we’ll be publishing in a few days’ time. One of the pressing issues in order to crystallise the positive spillovers from data (and so much of the attention in public debate only focuses on the negative spillovers) is the development of trustworthy institutions to handle access rights. We’re going to be doing more work on the governance of technologies, taking a historical perspective – more on that another time.

Anyway, this interest made me delighted to learn – chatting to him at the annual TSE digital conference – that Stephen Maurer had recently published Self-governance in Science: Community-Based Strategies for Managing Dangerous Knowledge. It’s terrifically interesting & I recommend it to anyone interested in this area.

The book looks at two areas, commerce and academic research, in two ways: historical case study examples; and economic theory. There are examples of success and of failure in both commercial and academic worlds, and the economic models summarise the characteristics that explain whether or not self-governance can be sustained.

So, for instance, in the commercial world, food safety and sustainable fisheries standards have been adopted and largely maintained through private governance initiatives and mechanisms, whereas synthetic biology has fared much less well, with an alphabet soup of competing standards. Competitive markets are not well able to sustain private standards, Maurer suggests: “Competitive markets can only address problems where society has previously attached some price tag to the issue.” Externalities do not carry these price tags. Hence supply chains with anchor firms are better able to bear the costs of compliance with standards – the big purchasing firm can require its suppliers to adhere.

Similarly, in the case of academic science the issue is whether there are viable mechanisms to force dissenting minorities to adhere to standards such as moratoria on certain kinds of research. The case studies suggest it is actually harder to bring about self-governance in scientific research as there are weaker sanctions than the financial ones at play in the commercial world. Success hinges on the community having a high level of mutual trust, and sometimes on the threat of formal government regulation. The book offers some useful strategies for scientific self-governance such as building coalitions of the willing over time (small p politics), and co-opting the editors of significant journals – as the race to publish first is so often the reason for the failure of research moratoria to last.

The one element I thought was largely missing from the analytical sections was the extent to which the character of the technologies or goods themselves affect the likelihood of successful self-governance. This is one aspect that has come up in our preparatory work – the cost and accessibility of different production technologies. The analysis here focuses on the costs of implementing standards, and on monitoring and enforcement.

This is a fascinating book, including the case studies, which range from atomic physics to fair trade coffee. It isn’t intended to be a practical guide (& the title is hardly the airport bookstore variety) but anybody interested in raising standards in supply chains or finding ways to manage the deployment of new technologies will find a lot of useful insights here.



Humans in the machine

There’s some very interesting insight into the human workforce making the digital platforms work in Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary Gray and Siddharth Suri. The book as a whole doesn’t quite cohere, though, nor deliver on the promise of the subtitle. The bulk of the book draws on interviews and surveys of people who work via platforms like Amazon’s famous Mechanical Turk, but also the internal Microsoft equivalent, UHRS, and a smaller social enterprise version, Amara.

This is all extremely interesting, about how people work – in the US and Bangalore – their tactics for making money, dealing with stress, how many hours they have to work and when, how much or little agency they have, and so on. Not least, it reminds or informs readers that a lot of AI is based on the labelling done by humans to create training data sets. However, not all the ghost work described is of this kind and some, indeed, has little to do with Silicon Valley except that a digital platform mediates between the employer and the seeker of work. As the authors note, this latter type is a continuation of the history of automation, the role of new pools of cheap labour in industrial capitalism, and the division of labour markets into privileged insiders and contingent – badly paid, insecure – outsiders. The new global underclass is just one step up from the old global underclass; at least they have a smartphone or computer and internet access.

The survey results confirm that some of the digital ghost workers value the flexibility they get reasonably highly – although with quite a high variance in the distribution. Not surprisingly, those with the least pressing need for income most value the flexibility. Some of the women workers in India also valued the connection to the labour market when they were unable to work outside their home because of childcare or family expectations. Similarly, with the Amara platform, “Workers can make ghost work a navigable path out of challenging circumstances, meeting a basic need for autonomy and independence that is necessary for pursuing other interests, bigger than money.”

The book’s recommendations boil down to recommending that platforms should introduce double bottom line accounting – in other words, find a social conscience alongside their desire for profit. Without a discussion of their (lack of) incentives to do so, this is a bit thin. Still, the book is well worth reading for fascinating anthropological insights from the field work, and for the reminder about the humans in the machine.