Yesterday brought the launch of a new and revised edition of Doing Capitalism in the Innovation Economy by William Janeway. Anybody who read the first (2012) edition will recall the theme of the ‘three player game’ – market innovators, speculators and the state – informed by Keynes and Minsky as well as Janeway’s own career, which combined an economics PhD with decades shaping the world of venture capital investment.
The term refers to how the complicated interactions between government, providers of finance and capitalists drive technological innovation and economic growth. The overlapping institutions create an inherently fragile system, the book argues – and also a contingent one. Things can easily turn out differently.
The book starts with a more descriptive first half, including Janeway’s “Cash and Control” approach to investing in new technologies, and also an account of how the three players in the US shaped the computer revolution. This is an admirably clear but nuanced history emphasising the important role of the state – through defense spending in particular – but also the equally vital private sector involvement. I find this sense of the complicated and path dependent interplay far more persuasive than simplistic accounts emphasising either the government or the market.
The second half of the book takes an analytical turn, covering financial instability, and the role of state action. It’s fair to say Janeway is not a fan of much of mainstream economic theory (at least macro and financial economics). He includes a good deal of economic history, and Carlota Perez features alongside Minsky in this account.
The years between the two editions of the book, characterised by sluggish growth, flatlining productivity, and also extraordinary changes in the economy and society brought about by technology, perhaps underline the reasons for this lack of esteem. After all, there do seem to be some intractable ‘puzzles’, and meanwhile, just in time for publication, Italy looks like it might be kicking off the Euro/banking crisis again. The experience of the past few years also helps explain the rationale for a second edition. That’s quite a lot of economic history and structural change packed into half a decade.
Although I read the first edition, I’m enjoying the second as well. And for those who didn’t read the book first time around, there’s a treat in store.
As I was looking at publishers’ websites for my New Year round up of forthcoming books, I noticed OUP billing Paul Geroski’s The Evolution of New Markets as due out in January 2018. This is odd as it was published in 2003, and Paul died in 2005; it isn’t obvious why there’s a reprint now. He was Deputy Chairman and then Chairman of the Competition Commission during my years there, and was a brilliant economist as well as a wonderful person. I learnt an amazing amount from being a member of his inquiry team.
Anyway, the catalogue entry for the reprint sent me back to my copy of the book, along with Fast Second, which Paul co-authored with Constantinos Markides. Fast Second challenges the received wisdom of first mover advantage: Amazon was not the first online books retailer, Charles Schwab not the first online brokerage, and so on. The opportunity lies in between being early in a radical new market and being late because a dominant design and business model have already emerged. The Fast Second business competes for dominance – and supposed first mover advantages are usually fast second advantages.
Paul’s book The Evolution of New Markets – in which I found a handwritten note he’d sent me, which made for an emotional moment – does what it says, and explores models of innovation diffusion – in other words, models of S-curves. His view was that the epidemic model of S-curves, which seems to be the standard one, was a misleading metaphor. He argued that information cascades best fit the observed reality. The epidemic model assumes that a new technology is adopted as information about it is diffused: each user passes on the information to the next user. However, as the book points out, information diffuses far faster than use. Users need to be persuaded rather than just informed.
More appropriate is a model whereby new technologies arrive in a number of variants at slightly different times, making adoption risky and costly – especially when there are network externalities or when people realize there is going to be a standards battle. Most new products fail, after all. But once one variant starts to dominate, the cost and risk decline and adoption will occur much faster.
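The contrast between the two models can be sketched numerically. This is my own illustrative toy, not anything from Geroski’s book: a logistic ‘epidemic’ path where adopters persuade non-adopters, against a ‘dominant design’ path where the adoption hazard stays low while rival variants compete and then jumps once one variant wins. The parameter values are arbitrary, chosen only to show the shapes.

```python
# Two stylised diffusion paths (illustrative only, invented parameters).
M = 1000      # potential adopters
steps = 100

def epidemic(beta=0.1):
    """Logistic S-curve: adoption spreads by word of mouth."""
    n, path = 1.0, []
    for _ in range(steps):
        n += beta * n * (M - n) / M   # adopters persuade non-adopters
        path.append(n)
    return path

def dominant_design(t_dominant=50, low=0.0005, high=0.25):
    """Adoption hazard is low during the standards battle, high afterwards."""
    n, path = 1.0, []
    for t in range(steps):
        hazard = high if t >= t_dominant else low  # risk/cost falls after shake-out
        n += hazard * (M - n)                      # remaining non-adopters convert
        path.append(n)
    return path

e, d = epidemic(), dominant_design()
# Mid-way the cascade path lags the smooth epidemic curve, then overtakes
# it sharply once a dominant design emerges; both end near saturation.
print(round(e[40]), round(d[40]))
print(round(e[60]), round(d[60]))
```

The point of the sketch is the timing, not the levels: the dominant-design path is nearly flat for half the horizon, then completes almost all of its adoption in a handful of periods – the pattern the book attributes to falling cost and risk once a standards battle resolves.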
It’s a persuasive argument, and a very readable book. Although the list price is surprisingly high for a short paperback, one can be confident second hand copies are just as good.
I polished off Eric von Hippel’s Free Innovation on my Washington trip. It’s an interesting, short book looking at the viability and character of innovation by individuals – alone or co-operating in communities. It is free in two senses: the work involved is not paid; and the innovations – or at least their designs – are not charged for, although they may subsequently be commercialised by the inventors or by other businesses. The viability of free innovation has been greatly extended by digital technology and the internet: there is more accessible useful information, and it is easier and cheaper to co-ordinate among a group. The diffusion of innovations is also easier, although rarely as extensive as when a commercial business takes them up and markets them. In fact, von Hippel argues that there are some strong complementarities between free innovation and commercial vendors, as the latter can bring the scale economies of production and marketing, while the former can enhance the use cases and the complementary know-how that increase the value of whatever it is.
The book has a little theorising, some survey evidence on the wide scope of free innovation, and plenty of nice examples. It ends with a couple of chapters on how to safeguard the legal rights of free innovation and how the phenomenon might be encouraged. The scope is what interests me particularly. I had already been thinking about phenomena such as open source software as a voluntary public good, which competes with marketed goods – compare Apache with Microsoft’s server software (as Shane Greenstein and Frank Nagle do here). There is clearly a growing amount of substitution across the production boundary going on.
The surveys reported in this book seem to indicate that millions of people are innovating (5-6% of respondents in the UK, US, Finland and Canada) – but equally, some of the innovations are minor contributors to economic welfare and one cannot imagine them ever having a wide market or competing with marketed equivalents. The question is how to get a handle on the scope and scale of all the open source, public good, innovation.
I’ve been reading old articles about hedonic adjustment and followed one trail to a 1983 paper by William Nordhaus about the productivity slowdown between the 1960s and 1970s. He wrote: “Is it not likely that we have temporarily exhausted many of the avenues that were leading to rapid technological change?” (1981 working paper version here). Timothy Bresnahan and Robert Gordon pounce on this in their introduction to the 1996 NBER volume they edited on The Economics of New Goods: “The world is running out of new ideas just as Texas has run out of oil. Most new goods now, compared with those of a century ago, are not founding whole new product categories or meeting whole new classes of needs.” (Apropos of Texan oil, see this: Mammoth Texas oil discovery biggest ever in USA, November 2016.)
Gordon has, of course, doubled down on this argument in his magisterial The Rise and Fall of American Growth. (It is btw a great holiday read – curl up under a blanket for a couple of days.)
This reminded me I’d seen this post by Alex Tabarrok at Marginal Revolution: A Very Depressing Paper on the Great Stagnation.
I haven’t yet read the paper it refers to, nor the earlier Jones one, and will do of course. It’s just that it seems we’ve been running out of ideas for over 30 years. I’ll say nothing about sequencing the genome and the consequent medical advances, new materials such as graphene, advances in photovoltaics, 3G/wifi/smartphones, not to mention current progress in AI, robotics, electric cars, interplanetary exploration. Oh, and chocolate HobNobs, introduced in 1987. Excellent for productivity.
For the time being, I’m going to stick with the hypothesis that we haven’t run out of ideas.
Having been guiltily reading a thriller or two, as well as David Olusoga’s Black and British, this is a brief post about an economics paper I’ve read, Paul David on Zvi Griliches and the Economics of Technology Diffusion. (Zvi was one of my econometrics teachers at Harvard, a very nice man yet so obviously brilliant that he was a bit scary. He would ask a question which might be completely straightforward, but one would have to scrutinise it carefully before answering, just in case.) Anyway, the Paul David paper is a terrific synopsis of three areas of work which are implicitly linked: how technologies diffuse in use; lags in investment, as new technologies are embodied in capital equipment or production processes; and multifactor productivity growth.
As David writes here: “The political economy of growth policy has promoted excessive attention to innovation as a determinant of technological change and productivity growth, to the neglect of attention to the role of conditions affecting access to knowledge of innovations and their actual introduction into use. The theoretical framework of aggregate production function analysis, whether in its early formulation or in the more recent genre of endogenous growth models, has simply reinforced that tendency.” He of course has been digging away at the introduction into use of technologies since before his brilliant 1989 ‘The Dynamo and the Computer‘. Another important point he makes here is that there has been little attention paid to collecting the microdata that would permit deeper study of diffusion processes, not least because the incentives in academic economics do not reward the careful assembly of datasets.
By coincidence, the paper concludes with a description of a virtuous circle in innovation whereby positive feedback to revenues and profits from a successful innovation lead to both learning about what customers value and further investment in R&D. Here is the diagram from the paper.
This was exactly the argument made yesterday at a Bank of England seminar I attended by Hal Varian (now chief economist at Google, known to all economics students as author of Microeconomic Analysis and Intermediate Microeconomics, and also with Carl Shapiro of Information Rules, still one of the best texts on digital economics). Varian argued there are three sources of positive feedback: demand side economies of scale (network effects), classic supply side economies of scale arising often from high fixed costs, and learning-by-doing. He wanted to make the case that there are no competition issues for Google, and so suggested that (a) search engines are not characterised by indirect network effects because search users don’t care how many advertisers are present; (b) fixed costs have vanished – even for Google-sized companies – because the cloud; (c) experience is a good thing, not a competitive barrier, and anyway becomes irrelevant when a technological jump causes an upset, as in Facebook toppling MySpace. I don’t think his audience shed its polite scepticism. Still, the learning-by-doing as a positive feedback mechanism argument is interesting.