Growth, stagnation, and degrowth

There’s a new wave of interest in the degrowth idea, recently summed up in the New Yorker by John Cassidy. The degrowthers are mainly inspired by environmental concerns – how can consumption possibly keep increasing without limit and not destroy the planet? – and the article also refers to Vaclav Smil’s recent book Growth, which adds to this seeming common sense the intellectual heft of energy physics and logistic curves.

I have no ideological commitment to the view that measured GDP growth will always revert to 1.5-2%, and found much food for thought in Smil. However, there is a misunderstanding in the degrowth movement about what growth implies for physical material and energy use, well explained by Noah Smith in his recent Bloomberg column. My colleague Dimitri Zenghelis also does an excellent job here of debunking degrowth, arguing it is not the best or only way to be green.

Smith refers to another recent book, Fully Grown: Why a Stagnant Economy is a Sign of Success by Dietrich Vollrath, to make the point that we can probably expect slower growth (Smil’s S-curve is flattening out) but this is very different from degrowth or zero growth.

The basic point is that the degrowth argument neither acknowledges intangible output growth nor explains what would somehow need to be taken away from the economy, when a new innovation arrives, to keep growth below zero. On the first point, think about oral rehydration therapy or mini-aspirin – new uses of existing materials which produce improved health outcomes that people are willing to pay for, whose value far exceeds the materials costs (sugar, salt and water; salicylic acid). On the second, if somebody invents a new item everybody wants to purchase – the way smartphones arrived in 2007, say – then what would we stop them buying to keep total growth at zero? And how?

Prof Vollrath’s book, which I read at the proof stage, is tremendous. He portrays the recent slowdown as an inevitability, the result of economic success. Past gains in health, and lower fertility rates due to reduced infant mortality and higher incomes, explain population ageing in the rich economies. Demography is reducing potential growth. We are on the whole also taking more leisure, with a trend decline in hours worked. Purchases of services are taking over from material goods as a share of expenditure, and productivity growth is slower in the service sector (for familiar, Baumol reasons). These two trends go a long way to explaining reduced growth.

The second half of the book explores other potential reasons for the growth slowdown, such as increased market power (see Thomas Philippon), inequality (Piketty) or too much government tax and regulation – and sets out the data explaining why none has a big enough effect to account for much of the trend slowdown. “I see no obvious reason why the growth rate would accelerate in the near future,” Vollrath concludes.

I really enjoyed Fully Grown, which gave me plenty to think about. It is also simply excellent on the data sources, growth accounting, and trends. But I don’t think it tells the whole story about innovation either. Vollrath accepts (as Robert Gordon does not) that there are significant technological advances under way; but he sees these as making production more efficient and thus accelerating the shift to services: an ever-smaller part of the economy is becoming super-efficient.

The catch, I think, is in using real GDP per capita as the sole indicator of growth. It is a conceptually flawed measure for an intangible/services economy. Consider a haircut, a service for which there is at least a volume measure (which many services do not have). If the price of haircuts goes up, real GDP as constructed goes down; but if the price is rising because people are substituting from cheap cuts at Big Jim’s Trims round the corner to expensive cuts in Covent Garden, it actually means that they are purchasing a haircut plus a bundle of quality attributes – lovely salon, free cup of tea, head massage, an hour’s talking therapy from a charming hairdresser… In some four-fifths of the economy, the Price x Quantity = Revenue equation used to construct the growth statistics does not work. Either we should be quality-adjusting many more purchases (and this has its own problems) or there isn’t even a volume measure (what is a unit of management consultancy?).
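To make the measurement point concrete, here is a minimal sketch of the deflation arithmetic, with entirely made-up prices and quantities (Big Jim’s and the Covent Garden salon are just the illustrative names from the paragraph above). When a quality upgrade shows up only as a higher price, deflating by that price wipes the improvement out of measured real output.

```python
# A minimal sketch of the deflation arithmetic behind the haircut example.
# All prices and quantities are invented for illustration.

# Year 1: everyone buys a basic cut at (the hypothetical) Big Jim's Trims.
q1, p1 = 100, 10.0           # 100 haircuts at £10
nominal_1 = q1 * p1          # £1,000

# Year 2: the same 100 people switch to a £40 Covent Garden cut
# (lovely salon, cup of tea, head massage, an hour of conversation).
q2, p2 = 100, 40.0
nominal_2 = q2 * p2          # £4,000

# The volume measure only counts "haircuts", so the statistics treat the
# whole price change as inflation and deflate nominal spending by it.
deflator = p2 / p1           # 4.0
real_2 = nominal_2 / deflator

print(nominal_2)             # 4000.0 nominal spending
print(real_2)                # 1000.0 "real" output: no measured growth,
                             # even though buyers value the extra quality
```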

Anyway, read Vollrath and Smil, devote energy to cherishing the environment. Read our Bennett Institute report out in 10 days on how to take a more rounded view of economic progress, including environmental impact, by considering wealth. But ignore the fashionable lure of degrowth.


Vision and serendipity

As the year hurtles toward its end, and what looks sure to be a tumultuous 2019, I’ve been retreating under the duvet with Mitchell Waldrop’s The Dream Machine, published in a handsome edition by Stripe Press. The book is a history of the early years of the computer industry in the US, centred around JCR Licklider and his vision of human-computer symbiosis.

It therefore has quite a narrow focus, being a detailed history of the people involved in a small slice of the effort that went into creating today’s connected, online world. Licklider played a decisive role at DARPA in prompting and funding the creation of the Arpanet and hence ultimately the Internet. I got quite caught up in the detail – the triumphs and setbacks of particular researchers, their job moves, who fell out with whom, and so on. (Better than the painful minutiae of our Brexit humiliation, for sure.)

One of the striking aspects of the tale is how serendipitous the outcomes were. There are some popular Whig interpretations of digital innovation, as if the creation of the personal computer, GUI, Internet etc were purposive. It wasn’t like that at all. Licklider for sure had a vision. It might or might not have worked. It was sort of chance that he ended up in DARPA with his hands on a suitable budget to fund the networking. It certainly wasn’t an intentional US government industrial strategy, as some accounts would have it. The Dream Machine was a Heath Robinson contraption. There are lessons in such histories both for scholars of innovation and for would-be industrial strategists.


Who benefits from research and innovation?

I’ve been pondering a report written by my friend and Industrial Strategy Commission colleague Richard Jones (with James Wilsdon), The Biomedical Bubble. The report calls for a rethinking of the high priority given to biomedical research in the allocation of research funding, and argues for more attention to be paid to the “social, environmental, digital and behavioural determinants of health”. It also calls for health innovation to be considered in the context of industrial strategy – after all, in the NHS the UK has a unique potential market for healthcare innovations. It points out that there are fewer ill people in the places where most biomedical and pharmaceutical research is carried out, thanks to the UK’s regional imbalances. It also points out that, despite all the brilliant past discoveries, the sector’s productivity is declining:

“In the 1960s, by some measures a golden age of drug discovery, developing a successful drug cost US$100 million on average. Since then, the number of new drugs developed per billion (inflation adjusted) dollars has halved every nine years. Around 2000, the cost per new drug passed the US$1 billion dollar milestone, and R&D productivity has since fallen for another decade.”
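As a rough sanity check on the quoted numbers (my own illustration, not part of the report), the implied cost curve can be reproduced in a couple of lines; the 1970 starting point is an assumption, chosen only to anchor the “halving every nine years” rate.

```python
# Rough check of the quoted arithmetic: cost per new drug starting at
# roughly $100m and doubling every nine years (i.e. drugs per billion
# dollars halving every nine years). The start year is an assumption.
start_year, start_cost = 1970, 0.1   # $0.1bn per drug, around 1970
doubling_period = 9                  # years

for year in range(start_year, 2011, doubling_period):
    cost = start_cost * 2 ** ((year - start_year) / doubling_period)
    print(year, f"${cost:.2f}bn per new drug")

# On these assumptions the cost per drug crosses $1bn around 2000,
# broadly consistent with the quotation's milestone.
```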

All of this seems well worth debating, for all its provocation to the status quo – and this is a courageous argument given how warm and cuddly we all feel about new medicines. I firmly believe more attention should be paid to the whole system from basic research to final use that determines the distribution of the benefits of innovation, rather than – as we do now – treating the direction of research and innovation as somehow exogenous and worrying about the distributional consequences. This goes for digital, or finance, say, as well as pharma. What determines whether there are widely-shared benefits – or not?

Serendipitously, I happened to read a couple of related articles in the past few days, although both concern the US. One was this BLS report on multi-factor productivity, which highlights pharma as one of the sectors making the biggest contributions to the US productivity slowdown (see figure 3). The other was this very interesting Aeon essay about the impact of financial incentives on US pharma research. It speaks to my interest in understanding the whole-system effects of research in this domain. Given that this landscape, in terms of both research and commerce, is US-dominated, the question of how the UK spends its own research money is surely all the more relevant. As The Biomedical Bubble asks:

“[T]he importance of the biotechnology sector has been an article of faith for UK governments for more than 20 years, even when any notion of industrial strategy in other sectors was derided. So the failure of the UK to develop a thriving biotechnology sector at anything like the scale anticipated should prompt reflection on our assumptions about how technology transfer from the science base occurs. The most dominant of these is that biomedical science would be brought to market through IP-based, venture capital funded spin-outs. This approach has largely failed, and we are yet to find an alternative.”

For it seems the model is no longer serving the US all that well either – not economy-wide innovation and productivity, and not the American population, which has worse health outcomes at higher cost than any other developed economy. There are some challenging questions here, fundamentally: who benefits from research and innovation, how should the public good being funded by taxpayers be defined and assessed, and what funding and regulatory structures would actually ensure the gains are widely shared?

Finance, the state and innovation

Yesterday brought the launch of a new and revised edition of Doing Capitalism in the Innovation Economy by William Janeway. Anybody who read the first (2012) edition will recall the theme of the ‘three player game’ – market innovators, speculators and the state – informed by Keynes and Minsky as well as Janeway’s own experience combining an economics PhD with a career shaping the world of venture capital investment.

The term refers to how the complicated interactions between government, providers of finance and capitalists drive technological innovation and economic growth. The overlapping institutions create an inherently fragile system, the book argues – and also a contingent one. Things can easily turn out differently.

The book starts with a more descriptive first half, including Janeway’s “Cash and Control” approach to investing in new technologies, and also an account of how the three players in the US shaped the computer revolution. This is an admirably clear but nuanced history emphasising the important role of the state – through defense spending in particular – but also the equally vital private sector involvement. I find this sense of the complicated and path dependent interplay far more persuasive than simplistic accounts emphasising either the government or the market.

The second half of the book takes an analytical turn, covering financial instability, and the role of state action. It’s fair to say Janeway is not a fan of much of mainstream economic theory (at least macro and financial economics). He includes a good deal of economic history, and Carlota Perez features alongside Minsky in this account.

The years between the two editions of the book, characterised by sluggish growth, flatlining productivity, and also extraordinary changes in the economy and society brought about by technology, perhaps underline the reasons for this lack of esteem. After all, there do seem to be some intractable ‘puzzles’, and meanwhile, just in time for publication, Italy looks like it might be kicking off the Euro/banking crisis again. The experience of the past few years also helps explain the rationale for a second edition. That’s quite a lot of economic history and structural change packed into half a decade.

Although I read the first edition, I’m enjoying the second as well. And for those who didn’t read the book first time around, there’s a treat in store.


Epidemics vs information cascades

As I was looking at publishers’ websites for my New Year round up of forthcoming books, I noticed OUP billing Paul Geroski’s The Evolution of New Markets as due out in January 2018. This is odd as it was published in 2003, and Paul died in 2005; it isn’t obvious why there’s a reprint now. He was Deputy Chairman and then Chairman of the Competition Commission during my years there, and was a brilliant economist as well as a wonderful person. I learnt an amazing amount from being a member of his inquiry team.

Anyway, the catalogue entry for the reprint sent me back to my copy of the book, along with Fast Second, which Paul co-authored with Constantinos Markides. Fast Second challenges the received wisdom of first mover advantage: Amazon was not the first online books retailer, Charles Schwab not the first online brokerage, and so on. The opportunity lies in between being early in a radical new market and being late because a dominant design and business model have already emerged. The Fast Second business competes for dominance – and supposed first mover advantages are usually fast second advantages.

Paul’s book The Evolution of New Markets – in which I found a handwritten note he’d sent me with it, which made for an emotional moment – does what it says, and explores models of innovation diffusion: in other words, models of S-curves. His view was that the epidemic model of S-curves, which seems to be the standard one, was a misleading metaphor. He argued that information cascades best fit the observed reality. The epidemic model assumes that a new technology is adopted as information about it is diffused: each user passes the information on to the next. However, as the book points out, information diffuses far faster than use. Users need to be persuaded rather than just informed.

More appropriate is a model whereby new technologies arrive in a number of variants at slightly different times, making adoption risky and costly – especially when there are network externalities or when people realize there is going to be a standards battle. Most new products fail, after all. But once one variant starts to dominate, the cost and risk decline and adoption will occur much faster.
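To see the difference between the two stories, here is a toy numerical sketch (my own illustration, not a model from either book): a logistic ‘epidemic’ curve alongside a cascade-style path that stays flat while competing variants make adoption risky, then takes off once a dominant design emerges. All parameters are invented purely to show the shapes.

```python
import math

# Toy comparison of the two diffusion stories; parameters are made up.
POPULATION = 1000
YEARS = 20

def epidemic(t, rate=0.6, midpoint=10):
    """Logistic S-curve: adoption spreads like an infection as
    information passes from user to user."""
    return POPULATION / (1 + math.exp(-rate * (t - midpoint)))

def cascade(t, dominance_year=12, pre_rate=5, post_rate=0.9):
    """Information-cascade story: several competing variants make early
    adoption slow and risky; once one design dominates, the remaining
    population adopts quickly."""
    if t < dominance_year:
        return pre_rate * t                      # hesitant early adoption
    settled = pre_rate * dominance_year
    remaining = POPULATION - settled
    return settled + remaining * (1 - math.exp(-post_rate * (t - dominance_year)))

for t in range(0, YEARS + 1, 2):
    print(f"year {t:2d}  epidemic {epidemic(t):6.0f}  cascade {cascade(t):6.0f}")
```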

It’s a persuasive argument, and a very readable book. Although the list price is surprisingly high for a short paperback, one can be confident second hand copies are just as good.