Who benefits from research and innovation?

I’ve been pondering a report written by my friend and Industrial Strategy Commission colleague Richard Jones (with James Wilsdon), The Biomedical Bubble. The report calls for a rethinking of the high priority given to biomedical research in the allocation of research funding, and argues for more attention to be paid to the “social, environmental, digital and behavioural determinants of health”. It also calls for health innovation to be considered in the context of industrial strategy – after all, in the NHS the UK has a unique potential market for healthcare innovations. It points out that there are fewer ill people in the places where most biomedical and pharmaceutical research is carried out, thanks to the UK’s regional imbalances. It also points out that, despite all the brilliant past discoveries, the sector’s productivity is declining:

“In the 1960s, by some measures a golden age of drug discovery, developing a successful drug cost US$100 million on average. Since then, the number of new drugs developed per billion (inflation adjusted) dollars has halved every nine years. Around 2000, the cost per new drug passed the US$1 billion dollar milestone, and R&D productivity has since fallen for another decade.”
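That halving rate compounds remorselessly. As a back-of-the-envelope check (my illustrative figures, not the report’s), output per dollar halving every nine years is equivalent to the cost per approved drug doubling every nine years – which, from a US$100 million starting point in the mid-1960s, does indeed pass the US$1 billion mark before the turn of the century:

```python
# Illustrative sketch of the "halved every nine years" claim (Eroom's law).
# Assumptions (mine, not the report's): cost per approved drug starts at
# $100m in 1965 and doubles every 9 years - the mirror image of drugs per
# billion dollars halving every 9 years.
START_YEAR, START_COST = 1965, 100e6
DOUBLING_TIME = 9  # years

def cost_per_drug(year):
    """Implied average cost of developing one successful drug in a given year."""
    return START_COST * 2 ** ((year - START_YEAR) / DOUBLING_TIME)

for year in (1965, 1983, 2001, 2010):
    print(year, f"${cost_per_drug(year) / 1e9:.2f}bn")
```

On these stylised numbers the implied cost is $0.10bn in 1965, $0.40bn in 1983 and $1.60bn by 2001 – roughly consistent with the quoted US$1 billion milestone around 2000.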

All of this seems well worth debating, provocative to the status quo as it is – and it is a courageous argument, given how warm and cuddly we all feel about new medicines. I firmly believe more attention should be paid to the whole system, from basic research to final use, that determines the distribution of the benefits of innovation – rather than, as we do now, treating the direction of research and innovation as somehow exogenous and worrying afterwards about the distributional consequences. This goes for digital or finance, say, as well as pharma. What determines whether there are widely-shared benefits – or not?

Serendipitously, I happened to read a couple of related articles in the past few days, although both concern the US. One was this BLS report on multi-factor productivity, which highlights pharma as one of the sectors making the biggest contributions to the US productivity slowdown (see figure 3). The other was this very interesting Aeon essay about the impact of financial incentives on US pharma research, which speaks to my interest in understanding the whole-system effects of research in this domain. Given that this landscape, in terms of both research and commerce, is US-dominated, the question of how the UK spends its own research money is surely all the more relevant. As The Biomedical Bubble puts it:

“[T]he importance of the biotechnology sector has been an article of faith for UK governments for more than 20 years, even when any notion of industrial strategy in other sectors was derided. So the failure of the UK to develop a thriving biotechnology sector at anything like the scale anticipated should prompt reflection on our assumptions about how technology transfer from the science base occurs. The most dominant of these is that biomedical science would be brought to market through IP-based, venture capital funded spin-outs. This approach has largely failed, and we are yet to find an alternative.”

For it seems the model is no longer serving the US all that well either – not economy-wide innovation and productivity, and not the American population, which has worse health outcomes at higher cost than any other developed economy. There are some challenging questions here, fundamentally: who benefits from research and innovation, how should the public good being funded by taxpayers be defined and assessed, and what funding and regulatory structures would actually ensure the gains are widely shared?

Made by Humans: the AI condition is very human indeed

A guest review by Benjamin Mitra-Kahn, Chief Economist, IP Australia

There is a lot of press about the coming – or going – of artificial intelligence, and in Made By Humans: The AI Condition Ellen Broad has written a short but comprehensive account of the state of play that deserves to be read by anyone wanting to know what is happening in AI today – certainly if you want to get in on the conversation.

The book is very contemporary, and if you haven’t had the time to attend every conference and workshop on AI since 2015, then you’re in luck: Broad has been to them all, and this book will catch you up on the developments. It also offers a series of insights into the challenges that AI and big data present – because it is about both – and the questions we should ask ourselves. These are not the humdrum questions, such as whom a self-driving car should choose to crash into (although a randomized element is suggested), but bigger and much more interesting ones about whether we need to be able to inspect the algorithm that made a decision. Does the algorithm need to be open source, or exposed to expert review to ensure best practice? Should the data that trained it be openly accessible, or available for peer review? Using every example about data and AI from the last three years, Broad steps through the issues under the hood that are only now being thought about.

This naturally brings up the question of government regulation. It is something Broad has changed her mind about, and she discusses this openly in a book that moves between technology story, personal discovery and ethical discussion. There is a role for regulation, says Broad, and the fact that we don’t yet know what that regulation could or should be is handled with some elegance. Technology is not a nirvana: computer code is sometimes held together with “peanut butter and goblins”, written by people who are busy, under-funded or just average. Simply aiming to ‘regulate AI’, however, is akin to wanting to ‘regulate medicine’: it is complex, and depends on whom you affect, their ability to engage, and the risks as well as the situation. It is ultimately a human-to-human decision. Not, perhaps, the argument one expects in a book on AI by the ex-director of policy of the Open Data Institute and previous head of the Australian Digital Alliance. But it is about humans, and the AI condition is about humanity – about fairness, intelligibility, openness and diversity, according to Broad.

The book finishes with US Senators questioning Facebook about Cambridge Analytica, and the recent implementation of the GDPR (data governance, not a new measure of GDP), which quickly dates the book – but that is a choice the author makes explicitly. This book is about the current conversation on big data and AI, and about participating in that conversation; it is not about the last 50 years of ethics and the history of computers. There is an urgency to the writing, and as someone interested in this area I found myself updated in places and challenged in others. Reading this book will allow anyone to participate in the AI debate: knowing what Rahimi’s warning about alchemy and AI is, being able to discuss the problems around the COMPAS sentencing software, or seeing why Volkswagen’s pollution scandal was a data and software scandal first. If this is a conversation you want to engage with, Broad’s book is an excellent starting point and update.

[amazon_link asins=’B07FXTGMGN’ template=’ProductAd’ store=’enlighteconom-21′ marketplace=’UK’ link_id=’af46feca-98b8-11e8-a10d-1d686ce27dc3′]

Work (and more) in digital times

This week I’ve been dipping into Work in the Digital Age: Challenges of the Fourth Industrial Revolution, edited by Neufeind, O’Reilly and Ranft. This is a collection of short essays brought together by Policy Network, the centre-left ‘progressive’ think tank. It’s a chunky book, starting with sections of essays on prospects for employment and the character of work. These cover, for example, the likely impact of automation in destroying and creating jobs, and the nature of work in the ‘gig’ economy. A section on labour relations and the welfare state follows. There are then chapters on individual European countries, ordered according to their ‘digital density’: Scandinavia and the Netherlands are classed as high, the UK and Germany as medium, and France, Italy and Central and Eastern Europe as low. There are also chapters on the US, Canada and India. The comparisons between countries were at the heart of the project, and I admit to not having read these chapters.

Given that I read so much of the economic literature on these issues, I haven’t found anything startlingly new so far, although there are some interesting perspectives. For example, Martin Kenney and John Zysman consider the question of financing technology start-ups when they face a long period of losses because of what’s known in the platform literature as the chicken-and-egg problem: a platform needs users on both sides – riders as well as drivers, for instance – because it won’t attract riders without enough drivers and won’t have enough drivers unless it has enough riders. The winner-take-all success stories then look for a long period of rents to recover those early losses, although many platforms simply fail. The essay argues that it is not clear whether this financing model is creating economic and social value. (I argue in a forthcoming paper that this is one aspect of the wider failure of competition economics to have figured out how to compare static welfare gains and losses to dynamic ones.)

In other chapters, Ursula Huws et al report new surveys on the extent of gig work or crowd work – from 9% in the UK to 22% in Italy, usually as part of a broader spectrum of casual work; Monique Kremer and Robert Went set out an agenda for ensuring automation does not increase inequality, covering the direction of robotisation, the enhancement of complementary skills, and distributional policy instruments; and in an introductory essay on the productivity paradox, Luc Soete discusses similarities with and differences from previous technological revolutions. The final chapter sets out a reform agenda – education and training; work transitions; social protection; redistributive taxes and transfers; and investing in infrastructure and innovation. This is high-level stuff, and therefore a bit motherhood and apple pie. Having contributed essays to this kind of collection myself, I know this pitch of generality is inevitable, but I do ache for some policy specifics as opposed to ‘a new inclusive narrative’.

For an overview of the technology and work debate, this is a useful volume, though, and it can be downloaded free from here. It’s certainly a good place to start for a comparative perspective, and the references to country-specific literature look really useful.

[amazon_link asins=’B07D5ZZD77′ template=’ProductAd’ store=’enlighteconom-21′ marketplace=’UK’ link_id=’35442710-9322-11e8-93ed-f1b496423b7d’]

The not-so-secret secrets of research

The Secret Life of Science: How it Really Works and Why It Matters by Jeremy Baumberg won’t hold many surprises for economists working in academia: the increasing role of publication metrics in career prospects, even though everyone knows them to be counter-productive or even pernicious; the narrowing of scholarly horizons within disciplinary silos, partly for this reason and partly (in the UK) because of the REF exercise; the creaking peer review system; the debate about open access and the monopoly power of certain journal publishers. I don’t know whether the same is true in the humanities and other social sciences, but the description of the systemic pressures, and the way they make it ever harder – for some very good reasons – to allow intellectual curiosity and boundary-crossing work free rein, makes this book a reflection not just on the natural sciences but on the institutional framework for research as a whole, into which citizens pour a good deal of funding.

There are, however, additional issues in the sciences, not least the very high cost of equipment and facilities in some areas, and the failure of the funding system as a whole to be able to reflect on and implement societal priorities. Another difference is the institutional framework, with much scientific research (to varying degrees across countries – there are interesting figures in the book) occurring in the private sector. Baumberg also discusses the ever-rising number of scientific researchers, in what seems to be a sort of winner-takes-all dynamic of funding concentrating in elite groups and no signs of increasing diversity, producing seemingly ever-decreasing returns.

Although the issues may be familiar, the book usefully presents them all as a combined system challenge. It is pretty factual and even-handed, but one ends with a strong sense of the need for some system-wide reforms. Baumberg has no silver-bullet solution, and quite right too. He makes some suggestions, such as introducing metrics other than citations, finding some ‘anarchic’ ways to fund science, and creating better and different career structures for postdocs.

I read the book just after Richard Jones’s and James Wilsdon’s thought-provoking and trenchant report on the ‘biomedical bubble’ in the UK. I doubt The Secret Life of Science will appeal to a general audience, as it’s much more about the institutional framework than about the science itself. But although researchers will already know – and live in their daily lives – the issues flagged up in the book, it’s a timely warning that the scientific endeavour which has brought our societies astonishingly greater prosperity and improvements in the quality of life is becoming sclerotic and failing to deliver for the societies funding it. Hard as it may be for a young researcher struggling under all these pressures to regard herself as part of the despised ‘elite’, that’s the big issue facing scientific and other research. Time to tackle it.

[amazon_link asins=’0691174350′ template=’ProductAd’ store=’enlighteconom-21′ marketplace=’UK’ link_id=’d3269161-8fdb-11e8-8b3d-e1165f417ac3′]

Taking time seriously in economics

I’ve been much taken with Consumption Takes Time: Implications for Economic Theory by Ian Steedman, published in 2001 and based on his Graz Schumpeter lectures. Prior to reading it, I was aware only of Becker’s famous 1965 paper (which I cite in my forthcoming paper on the implications of digital technologies for the production boundary) and of Jonathan Gershuny’s As Time Goes By and his work on time-use surveys.

Steedman works through basic microeconomic theory when a time identity (all time must be used up) and the fact that consumption takes time are included. The results are rather sweeping. Non-satiation fails for obvious reasons. There are always inferior goods – in fact, always Giffen goods and Veblen goods. Small price changes can lead to discontinuously large quantity changes. The existence of a general equilibrium is not clear.
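In sketch form (my notation, not necessarily Steedman’s), the consumer’s problem becomes a standard utility maximisation with a time identity added alongside the budget constraint:

```latex
\max_{x,\,\ell}\; U(x_1,\dots,x_n)
\quad \text{s.t.} \quad
\sum_{i} p_i x_i \le w\,\ell,
\qquad
\sum_{i} t_i x_i + \ell = T,
```

where $t_i$ is the time needed to consume a unit of good $i$, $\ell$ is labour time, and $T$ is the total time endowment. Because the second constraint is an identity – all time must be used up – consuming more of any good necessarily crowds out time for other goods or for earning, which is what overturns non-satiation and opens the door to the pervasive inferior and Giffen goods Steedman derives.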

Although Becker’s paper is often cited, time to produce (his focus) and time to consume are not taken seriously in economic theory. They should be, and all the more so in a services-intensive economy where technology is above all reallocating people’s time use and making some services far more efficient (albeit in a way we never measure).

Why this lacuna? I’d guess it’s because the analytics are complicated and there’s no data (absent proper time use surveys). Not a good enough excuse. Economies exist in time and space – and so do economic agents.

[amazon_link asins=’0415406382′ template=’ProductAd’ store=’enlighteconom-21′ marketplace=’UK’ link_id=’bd80bf08-8dad-11e8-805d-93e0674ec278′]