The gorilla problem

I was so keen to read Stuart Russell’s new book on AI, Human Compatible: AI and the Problem of Control, that I ordered it three times over. I’m not disappointed (although I returned two copies). It’s a very interesting and thoughtful book, and has some important implications for welfare economics – an area of the discipline in great need of being revisited, after years – decades – of a general lack of interest.

The book’s theme is how to engineer AI that can be guaranteed to serve human interests, rather than taking control and serving the specific interests programmed into its objective functions and rewards. The control problem is much debated in the AI literature, in various forms. AI systems aim to achieve a specified objective given what they perceive from data inputs, including through sensors. How to control them is becoming an urgent challenge – as the book points out, by 2008 there were more objects than people connected to the internet, giving AI systems ever more extensive access to the real world, for both input and output. The potential of AI lies in machines’ scope to communicate: a machine can do better than any number n of humans because it is not limited to information held in n separate brains and communicated imperfectly between them. Humans have to spend a ton of time in meetings; machines don’t.

Russell argues that the AI community has been too slow to face up to the probability that machines as currently designed will gain control over humans – keeping us at best as pets, and at worst creating a hostile environment for us and driving us slowly extinct, as we have done to gorillas (hence the gorilla problem). Some of the solutions proposed by those recognising the problem have been bizarre, such as ‘neural lace’ that permanently connects the human cortex to machines. As the book comments: “If humans need brain surgery merely to survive the threat posed by their own technology, perhaps we’ve made a mistake somewhere along the line.”

He proposes instead three principles to be adopted by the AI community:

  • the machine’s only objective is to maximise the realisation of human preferences
  • the machine is initially uncertain about what those preferences are
  • the ultimate source of information about human preferences is human behaviour.

He notes that AI systems embed uncertainty everywhere except in the objective they are set to maximise: the utility function and the cost/reward/loss function are assumed to be perfectly known. This is an approach shared, of course, with economics. There is a great need, Russell argues, to study planning and decision-making with partial and uncertain information about preferences.

There are also difficult social welfare questions. It’s one thing to think about an AI system deciding for an individual, but what about groups? Utilitarianism has some well-known issues, much chewed over in social choice theory. But here we are asking AI systems to (implicitly) aggregate over individuals and make interpersonal comparisons. As I noted in my inaugural lecture, we’ve created AI systems that are homo economicus on steroids, and it’s far from obvious this is a good idea. In a forthcoming paper with some of my favourite computer scientists, we look at the implications of the use of AI in public decisions for social choice and politics. The principles also require being able to teach machines (and ourselves) a lot about the links between human behaviour, the decision environment and underlying preferences. I’ll need to think about it some more, but these principles seem a good foundation for developing AI systems that serve human purposes in a fundamental way, rather than in a short-term instrumental one.
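
To make the contrast concrete, here is a toy sketch of my own (not from the book) of the second and third principles: a machine that starts out uncertain about a human’s preference and updates its belief only from observed behaviour, rather than optimising a reward function it assumes to be perfectly known. The options, the uniform prior and the 90% behavioural accuracy are all invented for illustration.

```python
# A toy sketch (my illustration, not from the book) of Russell's second and
# third principles: the machine is initially uncertain about the human's
# preference and learns about it only by observing human behaviour.
import random

random.seed(0)

OPTIONS = ["coffee", "tea"]
TRUE_PREFERENCE = "tea"       # hidden from the machine

# Principle 2: start with an uncertain (here, uniform) belief over preferences.
belief = {option: 1 / len(OPTIONS) for option in OPTIONS}

def observe_human_choice():
    """Principle 3: the evidence is behaviour. The human picks the preferred
    option 90% of the time; the rest is noise."""
    if random.random() < 0.9:
        return TRUE_PREFERENCE
    return random.choice([o for o in OPTIONS if o != TRUE_PREFERENCE])

def update(belief, choice, accuracy=0.9):
    """Bayes' rule: P(preference | choice) is proportional to
    P(choice | preference) * P(preference)."""
    posterior = {
        pref: (accuracy if choice == pref else 1 - accuracy) * prob
        for pref, prob in belief.items()
    }
    total = sum(posterior.values())
    return {pref: p / total for pref, p in posterior.items()}

for _ in range(10):
    belief = update(belief, observe_human_choice())

print(belief)   # the belief will (almost always) have shifted towards "tea"
```

The book’s point is that a machine which can be wrong about what we want has a reason to defer to us and to accept correction; one that is certain of its objective does not.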

Anyway, Human Compatible is a terrific book. It doesn’t need any technical knowledge and indeed has appendices that are good explainers of some of the technical stuff. I also like that the book is rather optimistic, even about the geopolitical AI arms race: “Human-level AI is not a zero-sum game and nothing is lost by sharing it. On the other hand, competing to be the first to achieve human-level AI without first solving the control problem is a negative-sum game. The payoff for everyone is minus infinity.” It’s clear that, to solve the control problem, cognitive and social scientists and the AI community need to start talking to each other a lot – and soon – if we’re to escape the fate gorillas suffered at our hands.


Human Compatible by Stuart Russell


Productivity machines

I thought Corinna Schlombs’ book Productivity Machines: German Appropriations of American Technology from Mass Production to Computer Automation looked like my kind of book, and I was right. This is a history-of-technology approach to the concept of productivity as it developed in the 20th century, in the frontier economy – the US – and as it transferred to Europe, especially Germany, and especially through Marshall Plan aid after the Second World War.

The book starts with the early steps taken by the Bureau of Labor Statistics to define and measure productivity, work that ran alongside its other key activity of defining and measuring price indices. This early definition was clearly focused on technology adoption and use, although with an awareness that contextual factors (closeness to materials, for example) would affect the measured production efficiency of bricks or shoes. The earliest work involved narrative accounts of production methods, with surveys and the development of an index coming later.

In the same period, Henry Ford started to reshape the context for thinking about productivity with his famous $5 wage, enabling the workers who made cars to think about affording to buy them. This paved the way for a wider understanding of productivity in the US as involving the whole production system, including methods of management and – by the postwar era – the American way of life, in which relatively egalitarian social relations and consumer affluence created demand and growing markets.

The 1920s saw many German visitors touring US plants to understand productivity, and they took away widely varying messages. Many industrialists concluded that the technology and organisation of the factory was the central issue, others argued workers should work harder and longer, while some – particularly unions – did appreciate the Ford message. Postwar, however, the influence of the US through its aid to Germany meant the introduction of a programme of visits by German workers and industrialists to the US, intended to land the message about the importance of the social framework. The distinction between US and German approaches to productivity subsequently centred on the role of unions – local bargaining in the US, national collective bargaining for the whole industry in Germany.

The book ends with chapters on IBM (whose welfare capitalism fitted well into the German context) and US computer technology driving automation in the 1960s. Automation was seen differently on the two sides of the Atlantic – as a driver of productivity in the US, and as a threat to employment and an eater of scarce capital in Germany. By 1971 IBM was dominant in computer markets across Europe, except, at that stage, the UK.

The bottom line I take from the book is that ‘productivity’ is a socially-embedded construct whose meaning has changed over time – something to bear in mind as we fret now about low productivity. Schlombs ends with the question of whether its meaning will change again now – whether ‘productivity’ will be driven by quality improvements – even quality-of-life improvements – and not by either working ever harder or consuming ever more. “Productivity Machines has shown that American expectations of economic growth and higher standards of living through increased productivity from technological improvements were unique to the American model and not inherent in capitalist economic relations.”

So, a very interesting read and a useful contribution to the inter-economist debate raging about the OECD productivity slowdown.


Festive economics!

This year’s Festival of Economics in Bristol is fast approaching (there are still some tickets available), and we have a wealth of treats including keynotes from Mark Carney, Danielle Walker, Deborah Hargreaves and Carl Frey. There are Ask an Economist sessions, and the Talking Politics podcast live with David Runciman (with Rana Foroohar, Carl Frey and me).

The subjects range from China, through the economics of social media and the digital giants, to the inclusive economy and the ‘left behind’ places, from genetics and economics to employee-owned businesses to the economics of social care.

Here are the recent and new books authored by this year’s participants – a little pre-reading to whet the appetite….

Red Flags by George Magnus

Women vs Capitalism by Vicky Pryce

The Technology Trap by Carl Frey

The Business of Platforms by Anabelle Gawer with co-authors Michael Cusumano and David Yoffie

Don’t be Evil: The Case Against Big Tech by Rana Foroohar

Britain by Numbers by Stuart Newman

Prosperity for All by Roger Farmer

Wilful Blindness by Margaret Heffernan

Where Power Stops by David Runciman

Rethinking Development and Politics by Meghnad Desai

A Grand Success by Aardman’s David Sproxton with co-author Peter Lord

Social Economics by Joan Costa-Font

Last but far from least, my new book, Markets, State and People: Economics for Public Policy is out in January.


The Great Game of trade

I was reading about the US-China trade war in Foreign Affairs, in an article (The Unwinnable Trade War) strikingly awash with metaphors of conflict – blazing guns, cease fire, détente…. It sent me back to the excellent book by Rebecca and Jack Harding we just published in the Perspectives series, Gaming Trade: Win-Win Strategies for the Digital Era. They (an economist and a political risk analyst) use game theory to analyse the interplay between the win-win economics of trade and the zero-sum game of geopolitical power. States (and they mean the big blocs – the US, China, Russia, the EU) need to develop strategies simultaneously for military force, information/cyber warfare, and economic soft power. Trade policy is an important weapon in this complicated strategic game.
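
The escalation logic here is essentially the textbook prisoner’s dilemma: each side’s individually rational move is to impose tariffs, even though both are better off cooperating. A minimal sketch (my own invented payoffs, not the Hardings’ model) makes the point.

```python
# A textbook prisoner's-dilemma sketch of a two-country tariff game.
# The payoffs are invented for illustration (higher is better); this is not
# the Hardings' own model.
from itertools import product

ACTIONS = ["free_trade", "tariffs"]

# payoff[(my_action, their_action)] = my payoff
payoff = {
    ("free_trade", "free_trade"): 3,   # win-win: both gain from trade
    ("free_trade", "tariffs"):    0,   # exploited by the other side
    ("tariffs",    "free_trade"): 4,   # short-run gain from unilateral tariffs
    ("tariffs",    "tariffs"):    1,   # trade war: both worse off than cooperation
}

def best_response(their_action):
    """Each state's individually rational reply, holding the other side fixed."""
    return max(ACTIONS, key=lambda a: payoff[(a, their_action)])

# A pair of actions is a Nash equilibrium if each is a best response to the other.
equilibria = [
    (a, b) for a, b in product(ACTIONS, repeat=2)
    if a == best_response(b) and b == best_response(a)
]

print(equilibria)                                  # [('tariffs', 'tariffs')]
print(payoff[("free_trade", "free_trade")],        # 3 each under cooperation...
      payoff[("tariffs", "tariffs")])              # ...but only 1 each in equilibrium
```

The only equilibrium is mutual tariffs, even though both sides prefer mutual free trade – the structure behind the Hardings’ warning that, without de-escalation, everyone loses.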

The book argues that the focus on old-fashioned trade wars is a distraction from the most important front at present (my goodness, it is impossible to avoid the military metaphors): control of cross-border flows of information, finance and intellectual property. The physical world and control of material resources are of course still important; but the new frontier is the cyber frontier. Indeed, a new column by the digital guru Andrew McAfee highlights the continuing dematerialisation of the economy (as I wrote about in The Weightless World, and as Danny Quah did too back in the 1990s). Russia’s engagement on the cyber front is all too obvious, with seemingly little effective counter-attack – or even with connivance from the Trump family.

I was persuaded by the Hardings’ argument that the current global situation is very dangerous, threatening economic stability, environmental sustainability and peace. Slow growth, or worse, will encourage the trends toward populism and authoritarianism, and a spiral into more protectionism. If you consider the economic and political whirlwind associated with the collapse of trade in the 1930s, the potential damage is even greater now: the World Bank’s new annual World Development Report paints a vivid picture of the extraordinary economic interdependence of the world’s economies. Unravelling these value chains would be catastrophic (Brexit Britain seems to be offering itself as an early test case of the potential damage).

In the end, the book argues, we all lose unless the current great power conflicts – soft, cyber and potentially ‘hard’ – de-escalate. The strategic path from here to the sunlit uplands of a new era of multilateralism is far from obvious. But the book does us a favour in drawing attention to the dangers, and to the need for clear strategies to get out of this situation.



Complexity across boundaries

Summer, with its leisurely hours for reading, seems a long time ago. I have managed Barbara Kingsolver’s excellent novel Unsheltered. And also, discovered down one of the by-ways of Twitter or the Internet, Worlds Hidden in Plain Sight, edited by David Krakauer. This is a collection of columns by researchers at the Santa Fe Institute dating back to its establishment in 1984.

SFI is of course the best-known centre for the study of complexity across disciplinary boundaries, and its work on economics is always at least intriguing and often far more. Like any collection of essays this is a mixed bag. One striking feature is how much more the later ones focus on the social sciences and the humanities, compared with the earlier focus on biology and physics. I don’t know if this is an artefact of the selection or actually reflects the balance of work there, but it is also obvious, I suppose, that human activity and society are inherently complex. A few of the essays – although intended for a not-entirely-specialist audience – are utterly incomprehensible to someone who isn’t a disciplinary expert. I suspect one or two were also incomprehensible to their authors. And the most recent batch consists of some disappointingly banal essays written for a newspaper.

In between, however, I found much food for thought – particularly on the lessons at disciplinary borders: information science, economics, anthropology, biology…. As one 2011 essay by Krakauer puts it, we have to recognize that “many of our most pressing problems and interesting challenges reside at the boundaries of existing disciplines and require the development of an entirely new kind of sensibility that remains ‘disciplined’ by careful empirical experiment, observation and analysis.”

What else? There’s a brilliant 2009 essay by Ole Peters, entirely justifying the cost of the book, explaining in a few pages with great clarity why ignoring the non-ergodicity of economic and financial variables has led to catastrophic policy errors. A 2012 Robert May essay alerted me to a calculation by Ben Friedman (my thesis adviser) that I hadn’t spotted before: before the crisis, running the US financial system “took one third of all profits earned on investment capital,” up from 10% three decades earlier.
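
For anyone who hasn’t met the ergodicity argument, the standard illustration (my own sketch, not reproduced from the essay) is a repeated gamble that multiplies wealth by 1.5 or 0.6 with equal probability. The ensemble average grows by 5% a round, but the time-average growth factor along any single trajectory is √(1.5 × 0.6) ≈ 0.95, so the typical individual is slowly ruined even though the ‘expected’ outcome looks rosy.

```python
# The standard non-ergodicity illustration (my sketch, not reproduced from the
# essay): a gamble that multiplies wealth by 1.5 or 0.6 with equal probability.
import random

random.seed(1)

UP, DOWN = 1.5, 0.6
ROUNDS, PEOPLE = 60, 10_000

# Ensemble view: the expected wealth factor per round is (1.5 + 0.6) / 2 = 1.05 > 1.
print("expected factor per round:    ", (UP + DOWN) / 2)

# Time-average view: the typical factor per round is sqrt(1.5 * 0.6) ~ 0.95 < 1.
print("time-average factor per round:", (UP * DOWN) ** 0.5)

# Simulate many individuals: the mean is propped up by a handful of lucky
# trajectories, while the typical (median) individual loses most of the stake.
wealth = [1.0] * PEOPLE
for _ in range(ROUNDS):
    wealth = [w * (UP if random.random() < 0.5 else DOWN) for w in wealth]

wealth.sort()
print("mean wealth:  ", sum(wealth) / PEOPLE)   # usually well above the starting 1.0
print("median wealth:", wealth[PEOPLE // 2])    # usually a small fraction of 1.0
```

The policy point is that decisions calibrated on the ensemble expectation can look fine while the path actually experienced by almost everyone deteriorates.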

And I’ve been thinking about information – the way ideas and imagination, not mechanical or physical constraints, limit social progress – and puzzling over the role of information in energy use and social complexity, a running thread through the collection. One day I’m going to have to get my head properly around information theory.

Like all collections, this book has the merit of being easy to dip into and read in chunks. It’s a great overview of the work of SFI, one of the most interesting research centres anywhere. More power to their elbow.

Worlds Hidden in Plain Sight