Banks versus business

The latest in my catch-up reading has been British Business Banking: The Failure of Finance Provision for SMEs by Michael Lloyd. This is obviously a bit niche but of great interest, as the vacuum in finance for growing businesses has often been identified as one of the reasons for the UK’s weakness in translating an excellent research base into lasting commercial success and eventually productivity gains. The stock of bank lending to SMEs in the UK was £166bn at the end of 2018, according to an OECD survey. (It will perforce have increased significantly in 2020.) If this sounds like a lot, it was only about one tenth the stock of their residential mortgages. The UK banking sector just doesn’t do much business lending.

I’ve long thought lack of competition is a key part of the story. The commercial banking sector has consolidated steadily over the decades – I’m old enough to remember some of those swallowed up, like Williams & Glyn’s and National Provincial. In this book Michael Lloyd argues that while the development of an oligopoly might have been part of the cause of the SME finance gap, introducing more competition won’t be part of the cure now. There has been some new entry, such as Santander, but the newcomers are not interested in the SME sector either.

In any case, he sees the gap as a quasi-cultural one, linked to the “free market” philosophy embraced even more eagerly in the UK than in the US, and to the centralisation of banking decisions. He advocates a restoration of relationship banking, spearheaded by a state investment bank. What we are getting instead is a National Infrastructure Bank – needed, but unlikely to do a lot for SMEs around the country. However, I find the relationship argument persuasive: I’d see it in terms of a vast loss of information that has come about through bank mergers and centralisation. Automated lending decisions are based on too little information, whereas old-fashioned bankers with boots on the ground would have a wealth of information about local SMEs.

It’s a hard problem to solve even with a government willing to have a go. Still, this is an interesting book for those worrying at the issue, well worth a read.


Baumol meets Marx

I read Jason Smith’s Smart Machines and Service Work: Automation in an Age of Stagnation because there was a positive discussion of it on Twitter. I’d describe it as a mash-up of Baumol (‘cost disease’) and Marx (‘exploitation’).

The first part of the book is a rant about technology and why today’s tech will not increase productivity. It channels Robert Gordon and criticises economists like Erik Brynjolfsson (or, before him, Paul David) for arguing that there are delays between innovation and its eventual productivity effects.

I have the same problem with this as with Gordon’s magnum opus: it might turn out to be correct that today’s technologies have no productivity impact, but focusing only on digital entertainment and communication devices is completely unpersuasive. Vaccines, hello? The wave of biomedical innovation, like the development of mRNA vaccines, has rested on the plunging cost of gene sequencing, enabled by computation applied to massive amounts of data. Lab benches and test tubes, but also computers. The transition to green energy supply will require large-scale computation to manage storage, networks and grids. Additive manufacturing has many potential applications, including printing organs and tissues. These applications are genuinely slow to emerge: large additional investments in equipment are needed, the organisational and ethical hurdles are high, and other discoveries might be required to make them economically viable. We’re lucky so much of the prior mRNA research had been done before 2020.

Anyway, halfway through, the book turns to the growth of the service sector, the automation of routine tasks, and the debate about the potential impact on jobs. It looks back, too, at the well-documented decline in middle-income jobs and the growth of the contingent workforce. Having introduced Baumol’s familiar ‘cost disease’, it then moves on to a Marxist analysis. I never learned Marxist economics, so I found this quite interesting but heavy going, as it has its own jargon. Still, it is surely right to consider the impact of automation in the context of power struggles, or class conflict.
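For anyone who has not met it, the ‘cost disease’ mechanism is simple to state: productivity grows in one sector but not the other, a shared labour market ties wages together, and so the relative cost of the stagnant services rises even though nothing about them has changed. Here is a minimal sketch, with all parameters purely illustrative:

```python
# Minimal Baumol 'cost disease' sketch: two sectors, one common wage.
# All numbers are illustrative, not taken from the book.

years = 50
g_progressive = 0.02   # annual labour-productivity growth in manufacturing
g_stagnant = 0.0       # services: no productivity growth

prod_manuf, prod_serv = 1.0, 1.0
for _ in range(years):
    prod_manuf *= 1 + g_progressive
    prod_serv *= 1 + g_stagnant

# Competition for workers ties the economy-wide wage to productivity
# in the progressive sector.
wage = prod_manuf

# Unit labour cost = wage / output per worker.
cost_manuf = wage / prod_manuf   # stays at 1.0
cost_serv = wage / prod_serv     # rises one-for-one with the wage

print(f"After {years} years, services cost {cost_serv:.2f}x "
      f"their original price relative to manufactures.")
# ~2.69x: the service itself hasn't changed, but its relative cost has.
```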

The book has some sections where it pauses to ask what is actually meant by ‘productivity’, a question of evergreen interest to me. It touches here on the issue of time use and time saving in services, and on activities crossing the production boundary, making it hard to measure ‘true’ productivity. As it points out, many previously household (uncounted) activities became marketed during the 20th century (‘commoditised’), and are often low-pay and precarious. However, the book then veers back to the more abstract class struggle.

All in all, I found the book quite interesting for its novel (to me) perspective, and it is well written. But much of the (non-Marxist) economic literature it draws on will be familiar to many people enticed by the subject matter. What it adds to the technology debate, quite rightly, are the issues of power and the deregulation of the labour market, beyond discussions of gig platforms. But it didn’t tell me anything new about the productivity puzzle.


Have we run out of innovations?

I’ve been reading old articles about hedonic adjustment and followed one trail to a 1983 paper by William Nordhaus about the productivity slowdown between the 1960s and 1970s. He wrote: “Is it not likely that we have temporarily exhausted many of the avenues that were leading to rapid technological change?” (1981 working paper version here). Timothy Bresnahan and Robert Gordon pounce on this in their introduction to the 1996 NBER volume they edited, The Economics of New Goods: “The world is running out of new ideas just as Texas has run out of oil. Most new goods now, compared with those of a century ago, are not founding whole new product categories or meeting whole new classes of needs.” (Apropos of Texan oil, see this: Mammoth Texas oil discovery biggest ever in USA, November 2016.)

Gordon has, of course, doubled down on this argument in his magisterial The Rise and Fall of American Growth. (It is btw a great holiday read – curl up under a blanket for a couple of days.)

This reminded me I’d seen this post by Alex Tabarrok at Marginal Revolution: A Very Depressing Paper on the Great Stagnation.

I haven’t yet read the paper it refers to, nor the earlier Jones one, and will do of course. It’s just that it seems we’ve been running out of ideas for over 30 years. I’ll say nothing about sequencing the genome and the consequent medical advances, new materials such as graphene, advances in photovoltaics, 3G/wifi/smartphones, not to mention current progress in AI, robotics, electric cars, interplanetary exploration. Oh, and chocolate HobNobs, introduced in 1987. Excellent for productivity.

For the time being, I’m going to stick with the hypothesis that we haven’t run out of ideas.


It’s what happens after innovation that matters for productivity

I’ve been guiltily reading a thriller or two, as well as David Olusoga’s Black and British, so this is just a brief post about an economics paper: Paul David on Zvi Griliches and the Economics of Technology Diffusion. (Zvi was one of my econometrics teachers at Harvard, a very nice man, but so obviously brilliant that he was a bit scary. He would ask a question which might be completely straightforward, but one would have to scrutinise it carefully before answering, just in case.) Anyway, the Paul David paper is a terrific synopsis of three areas of work which are implicitly linked: how technologies diffuse in use; lags in investment, as new technologies are embodied in capital equipment or production processes; and multifactor productivity growth.

As David writes here: “The political economy of growth policy has promoted excessive attention to innovation as a determinant of technological change and productivity growth, to the neglect of attention to the role of conditions affecting access to knowledge of innovations and their actual introduction into use. The theoretical framework of aggregate production function analysis, whether in its early formulation or in the more recent genre of endogenous growth models, has simply reinforced that tendency.” He has, of course, been digging away at the introduction of technologies into use since before his brilliant 1990 ‘The Dynamo and the Computer’. Another important point he makes here is that little attention has been paid to collecting the microdata that would permit deeper study of diffusion processes, not least because the incentives in academic economics do not reward the careful assembly of datasets.
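Griliches’ own famous early work fitted logistic S-curves to the diffusion of hybrid corn across US states, and that functional form still anchors the literature David surveys. A minimal sketch of fitting such a curve, using synthetic data and illustrative parameters:

```python
# Fitting a logistic diffusion curve of the kind Griliches used for
# hybrid corn: P(t) = K / (1 + exp(-b * (t - t0))). Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, b, t0):
    """K = ceiling (long-run adoption), b = speed, t0 = midpoint year."""
    return K / (1 + np.exp(-b * (t - t0)))

t = np.arange(15)                       # 15 years of observations
rng = np.random.default_rng(0)
adoption = logistic(t, K=0.9, b=0.8, t0=7) + rng.normal(0, 0.02, t.size)

(K_hat, b_hat, t0_hat), _ = curve_fit(logistic, t, adoption, p0=[1, 0.5, 7])
print(f"ceiling={K_hat:.2f}, speed={b_hat:.2f}, midpoint=year {t0_hat:.1f}")
```

The economics is in what drives the estimated ceiling and speed across places and firms, which is exactly where the missing microdata bite.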

By coincidence, the paper concludes with a description of a virtuous circle in innovation, whereby positive feedback from a successful innovation to revenues and profits leads both to learning about what customers value and to further investment in R&D. There is a diagram of this circle in the paper.

This was exactly the argument made yesterday by Hal Varian at a Bank of England seminar I attended (Varian is now chief economist at Google, known to all economics students as the author of Microeconomic Analysis and Intermediate Microeconomics, and, with Carl Shapiro, of Information Rules, still one of the best texts on digital economics). Varian argued there are three sources of positive feedback: demand-side economies of scale (network effects), classic supply-side economies of scale arising often from high fixed costs, and learning-by-doing. He wanted to make the case that there are no competition issues for Google, and so suggested that (a) search engines are not characterised by indirect network effects, because search users don’t care how many advertisers are present; (b) fixed costs have vanished – even for Google-sized companies – because of the cloud; and (c) experience is a good thing, not a competitive barrier, and anyway becomes irrelevant when a technological jump causes an upset, as when Facebook toppled MySpace. I don’t think his audience shed its polite scepticism. Still, the learning-by-doing argument as a positive feedback mechanism is interesting.
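Of the three mechanisms, learning-by-doing is the easiest to make concrete: on the textbook learning curve (Wright’s law), unit costs fall by a fixed proportion with each doubling of cumulative output. A minimal sketch, with illustrative parameters:

```python
# Learning curve (Wright's law): unit_cost = c0 * Q**(-a), where Q is
# cumulative output; each doubling of Q cuts cost by a fixed share.
# Parameters are illustrative.
import math

c0 = 100.0            # cost of the first unit
learning_rate = 0.20  # 20% cost reduction per doubling of cumulative output
a = -math.log2(1 - learning_rate)   # progress elasticity, about 0.32

def unit_cost(cumulative_output):
    return c0 * cumulative_output ** (-a)

for q in (1, 2, 4, 8, 16):
    print(f"cumulative output {q:>2}: unit cost {unit_cost(q):5.1f}")
# 100.0, 80.0, 64.0, 51.2, 41.0: cost roughly halves every three doublings.
```

On this mechanism, a head start in cumulative experience is itself a barrier to entry, which rather undercuts point (c).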


A land built by economists?

Last week I took part in a workshop organised by the National Infrastructure Commission on the economics of infrastructure and growth. It was fascinating, particularly for illuminating a dilemma for economists thinking about the newly prominent issue of infrastructure investment. Infrastructure is a Good Thing – but how much is needed, and which investments? How should the NIC advise on the choices most likely to increase economic welfare and growth?

There is some well-understood machinery for answering such questions, in the form of appraisals (or post hoc evaluations) using cost-benefit analysis. The trouble is that although the technique, firmly embedded in policy advice, is useful for assessing relatively small changes, it is next to useless in the context of big investment projects that involve externalities such as environmental costs and benefits or network effects, might change people’s behaviour significantly, or might have non-linear impacts which accelerate beyond a point of critical mass. These are, of course, situations that might often arise with big infrastructure projects. And the challenge is all the greater because different kinds of infrastructure will affect each other (transport and communications networks will be complementary, natural capital and flood defence schemes will interact). To cap it all, there is an economic geography dimension to this, and infrastructure will affect the distribution of economic activity over space, which will also affect the distribution of economic opportunities and incomes.

So these questions are difficult, and nobody thinks economics can answer them fully. What was interesting about the discussion and subsequent emails was the emergence of a basic dilemma. One of the strengths of the conventional economic approach is the intellectual discipline it enforces. Cost-benefit analysis looks at the direct benefits of a project to its users and converts them into a single unit of measurement, money (although it could be owls, or happy faces). It is a powerful brake on wishful thinking.
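Stripped to its core, the machinery is just discounting: monetise the direct user benefits, discount them, and compare them with costs. A minimal sketch with purely illustrative figures (the 3.5% rate echoes the Treasury Green Book’s standard discount rate):

```python
# Minimal cost-benefit appraisal: discount a stream of monetised user
# benefits and compare with the upfront cost. All figures illustrative.

def npv(cashflows, rate):
    """Net present value; cashflows[0] is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

capital_cost = 500.0    # year-0 outlay, in £m
annual_benefit = 40.0   # monetised time savings etc., £m per year
years = 30
discount_rate = 0.035   # Green Book-style social discount rate

flows = [-capital_cost] + [annual_benefit] * years
print(f"NPV = £{npv(flows, discount_rate):.0f}m")   # about £236m here
# A positive NPV says the narrowly measured benefits justify the project;
# the externalities and non-linearities discussed above are what it omits.
```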

Economics also sets out the circumstances in which wider benefits might need taking into account: when there is good reason to believe that resources are misallocated, so the investment might lead to a more efficient outcome (land use in the UK would be an example – the planning system enforces many inefficiencies); when there is good reason to expect a project will bring about agglomeration externalities – the additional productivity arising from there being more people in one area, because there is a deeper pool of labour and skills and know-how can spread more easily; when there is reason to be confident there will be non-marginal benefits that private investors will not capture; and when infrastructure can act as a mechanism to co-ordinate private investment decisions. The last is interesting because it suggests the prospect of multiple equilibria, depending on which place or project is selected as the focal point for co-ordination.
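The agglomeration channel is the one most often quantified: productivity is modelled as rising with effective density with some small elasticity, so even a large scheme implies a modest-looking uplift. A back-of-envelope sketch, with both numbers illustrative assumptions:

```python
# Back-of-envelope agglomeration uplift: productivity ~ density**elasticity.
# Both parameters are illustrative assumptions, not estimates.

elasticity = 0.04        # a few percent, in the range often cited
density_increase = 0.25  # scheme raises effective density by 25%

uplift = (1 + density_increase) ** elasticity - 1
print(f"Implied productivity uplift: {uplift:.2%}")   # about 0.9%
```

Small per-worker percentages applied across a whole city region can still dominate an appraisal, which is why the question of when it is legitimate to count them matters so much.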

I would add another complication, which is the scope for small changes in transaction costs or frictions to bring about big changes in behaviour. In some contexts a time saving of 10 minutes will be marginal, but in others it might tip a lot of people into changing their commuting or house-purchase patterns. A past example is the switch from dial-up internet to broadband; many economists thought this would be a small change, but it turned out to be revolutionary. The behaviour-change point makes post hoc evaluations tricky, because the behaviour is endogenous to the infrastructure choices.
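The tipping point shows up clearly in a simple discrete-choice sketch: the same 10-minute saving moves almost nobody when the alternative is far from competitive, and a large share of people when they are close to indifference. Numbers are illustrative:

```python
# Why a 10-minute saving is marginal in one context and tipping in another:
# a logistic choice model, steepest near indifference. Illustrative numbers.
import math

def share_switching(utility_gap_minutes, sensitivity=0.2):
    return 1 / (1 + math.exp(-sensitivity * utility_gap_minutes))

saving = 10  # minutes saved by the new infrastructure

for baseline_gap in (-40, 0):   # far from indifference, and at it
    before = share_switching(baseline_gap)
    after = share_switching(baseline_gap + saving)
    print(f"baseline gap {baseline_gap:+} min: share {before:.1%} -> {after:.1%}")
# -40 min: 0.0% -> 0.2%; at indifference: 50.0% -> 88.1%.
```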

Everybody in the workshop broadly agreed about the basic intellectual framework (well, we were almost all economists), but the dilemma is whether it is ever feasible or sensible to allow consideration of the wider benefits. The case against – and in favour of sticking to narrow, conventional cost-benefit analysis – is to stick to situations where there are clear signals from market prices. For example, is there a big difference in land prices, indicating resource misallocation? Otherwise, there is a danger of the kind of mistakes that have always come with ‘picking winners’ in the past. The opposing case, for being more open to trying to estimate wider benefits, is to ask: what would the country look like if built by economists? It would be a dreary place of functional concrete boxes in a mesh of motorways. The Victorian infrastructure we still rely on would never have been built if subjected first to a cost-benefit analysis. Britain used to be considered a world leader in infrastructure, but then the use of cost-benefit analysis spread widely, and now we are clearly laggards.

I’m firmly in the camp that we should be looking to develop new techniques and data to inform a wider approach. The UK economy needs infrastructure investment that will make a big difference to productivity and growth, given the self-inflicted economic headwinds we face. We need faster growth in the great provincial cities, and significant investments that will make a step-change difference in the economic well-being of people around the country in terms of air quality, flooding etc. The NIC faces quite a challenge, but also a tremendous opportunity.

My favourite books about infrastructure are Brett Frischmann’s Infrastructure: The Social Value of Shared Resources; and my colleague Richard Agénor’s (more wide-ranging) Public Capital, Growth and Welfare. Ricardo Hausmann has written about the distributional impact of infrastructure (along with natural capital, the most significant capital people on low incomes have access to).
