Weightlessness Redux

The weeks are flying past. I’ve recently read an array of non-economics books (Owen Hatherley’s Trans-Europe Express, Susan Orlean’s The Library Book, Francis Spufford’s True Stories, a couple of the re-issued Maigrets) and also Matt Stoller’s Goliath and Andrew McAfee’s More from Less. I’ll review Goliath in my next post.

More from Less: the surprising story of how we learned to prosper using fewer resources – and what happens next (to give it its full, overly wordy subtitle) is presumably aimed at the airport bookshop market. It’s written in a very accessible way and it summarises a lot of interesting research – although not McAfee’s own.

The main point is that the material resource intensity of economic growth has been declining in the rich western economies. This is set in an account of the origins of modern capitalism in the Industrial Revolution – emphasising the importance of ideas and contestability through markets – and the urgent debate about the trade-off between humans getting better off (escaping the Malthusian trap) and the damaging environmental impact of growth. The dematerialisation of the economy is helping improve the terms of that trade-off.

A major difficulty I have in reviewing this is that, although it’s an enjoyable read, I wrote a book making the same point in 1997, The Weightless World (out of print, free pdf here). There’s no reason at all McAfee should have read it as I was a nobody, and it was a long time ago. But it does mean that (perhaps uniquely) I can’t find anything that’s new in More From Less. The research he cites concerning dematerialisation dates from 2012 (Chris Goodall) and 2015 (Jesse Ausubel, The Return of Nature) – so this is another example of a phenomenon being discovered twice, because there was similar work in the mid-1990s on material flow accounts, on which I based my book. Alan Greenspan even made a speech about it in 1996. It’s a noteworthy phenomenon, so I hope McAfee does alert new readers to it. He puts far more emphasis on environmental challenges than I did back in the more innocent 1990s; my focus was more on the socio-economic consequences of a dematerializing economy.

However, the weightlessness or dematerialization phenomenon doesn’t deliver a knockout blow to the degrowth argument that a reduced but still positive material intensity of growth is not enough. Tim Jackson is the most thoughtful advocate of this argument – see this recent essay in Science. It may be that we need to find a way to tread more lightly on the planet in absolute terms as well as relative ones, although I’ll welcome weightless growth as better than the weighty alternative. And – as even no growth is politically divisive, never mind degrowth – the issues raised in More From Less are difficult and important ones.


Thinking about AI

There are several good introductions to AI; the three I’ve read complement each other well. Hannah Fry’s Hello World is probably the best place for a complete beginner to start. As I said in reviewing it, it’s a very balanced introduction. Another is Pedro Domingos’s The Master Algorithm, which is more about how machine learning systems work, with a historical perspective covering different approaches, and a hypothesis that in the end they will merge into one unified approach. I liked it too, but it’s a denser read.

Now I’ve read a third, Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. It gives a somewhat different perspective by describing wonderfully clearly how different AI applications actually work, and hence helps the reader understand their strengths and limitations. I would say these are the most illuminating simple yet meaningful explanations I’ve read of – for example – reinforcement learning, convolutional neural networks, word vectors, etc. I wish I’d had this book when I first started reading some of the AI literature.
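
To give a flavour of the kind of idea Mitchell explains – and this is my own toy sketch, not an example from her book – a word vector represents a word as a list of numbers, so that geometric closeness can stand in for similarity of meaning:

```python
# Hand-made toy vectors: the two dimensions here (royalty, gender) are invented
# for illustration; real systems learn hundreds of dimensions from huge text corpora.
import numpy as np

vectors = {
    "king":  np.array([0.9,  0.4]),
    "queen": np.array([0.9, -0.4]),
    "man":   np.array([0.1,  0.4]),
    "woman": np.array([0.1, -0.4]),
}

def cosine(a, b):
    """Similarity of direction: nearer 1 for related words, near zero or negative otherwise."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(vectors["king"], vectors["queen"]))   # ~0.67: related meanings
print(cosine(vectors["king"], vectors["woman"]))   # ~-0.17: not similar
# The famous analogy: king - man + woman lands closest to queen
target = vectors["king"] - vectors["man"] + vectors["woman"]
print(max(vectors, key=lambda w: cosine(target, vectors[w])))   # 'queen'
```

The numbers are made up, but the geometry is the point; Mitchell’s explanation of how such vectors are actually learned from text is far clearer than anything I could reproduce here.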

One thing that jumps out from the crystal clear explanations is how dependent machine learning systems are on humans – from the many who spend hours tagging images to the super-skilled ‘alchemists’ who are able to build and tune sophisticated applications: “Often it takes a kind of cabbalistic knowledge that students of machine learning gain both from their apprenticeships and from hard-won experience.”

The book starts with AI history and background, then covers image recognition and similar applications. It moves on to issues of ethics and trust, and then natural language processing and translation. The final section addresses the question of whether artificial general intelligence will ever be possible, and how AI relates to knowledge and to consciousness. These are open questions, though I lean toward the view – as Mitchell does – that there is something important about embodiment for understanding in the sense that we humans mean it. Mitchell argues that deep learning is currently hitting a ‘barrier of meaning’, while being superb at narrowly defined tasks of a certain kind. “Only the right kind of machine – one that is embodied and active in the world – would have human level intelligence in its reach. … after grappling with AI for many years, I am finding the embodiment hypothesis increasingly compelling.”

The book then ends with brief reflections on a series of questions – when will self-driving cars be common, will robots take all the jobs, what are the big problems left to solve in AI.

Together, the three books complement each other and stand as an excellent introduction to AI, cutting through both the hype and the scary myths, explaining what the term covers and how the different approaches work, and raising some key questions we will all need to be thinking about in the years ahead. The AI community is well-served by these thoughtful communicators. A newcomer to the literature could read these three and be more than well-enough informed; my recommended order would be Fry, Mitchell, Domingos. Others may know of better books of course – if so, please do comment & I’ll add an update.

UPDATE

The good folks on Twitter have recommended the following:

Human Compatible by Stuart Russell (I briefly forgot how much I enjoyed this.)

Rebooting AI by Gary Marcus

Parallel Distributed Processing by David Rumelhart & others (looks technical…)

The Creativity Code by Marcus du Sautoy

Machine, Platform, Crowd by Andrew McAfee & Erik Brynjolfsson (also reviewed on this blog)

Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (a key textbook)


Configuring the lumpy economy

Somebody at the University of Chicago Press has noticed how my mind works, and sent me Slices and Lumps: Division + Aggregation by Lee Anne Fennell. It’s about the implications of the reality that economic resources are, well, lumpy and variably slice-able. The book starts with the concept of configuration: how to divide up goods that exist in lumps to satisfy various claims on them, and how to put together ones that are separate to satisfy needs and demands. The interaction between law (especially property rights) and economics is obvious – the author is a law professor. So is the immediate implication that marginal analysis is not always useful.

This framing in terms of configuration allows the book to range widely over various economic problems. About two thirds of it consists of chapters looking at the issues of configuration in specific contexts such as financial decisions, urban planning and housing decisions. The latter, for example, encompasses some physical lumpiness or indivisibilities and some legal or regulatory ones. Airbnb – where allowed – enables transactions over excess capacity due to lumpiness, as home owners can sell temporary use rights.

The book is topped and tailed by some general reflections on lumping and slicing. The challenges are symmetric. The commons is a tragedy because too many people can access resources (slicing is too easy), whereas the anti-commons is a tragedy because too many people can block the use of resources. Examples of the latter include the redevelopment of a brownfield site, where there are too many owners to get them all to agree to sell their land, and also patent thickets. Property rights can be both too fragmented and not fragmented enough. There are many examples of the way policy can shape choice sets by changing them to be more or less chunky – changing tick sizes in financial markets, but also unbundling albums so people can stream individual songs. Fennell writes, “At the very least, the significance of resource segmentation and choice construction should be taken into account in thinking innovatively about how to address externalities.” Similarly, when it comes to personal choices, we can shape those by altering the units of choice – some are more or less binary (failing a test by a tiny margin is as bad as failing by a larger one), others involve smaller steps (writing a few paragraphs of a paper).

Woven through the book, too, are examples of how digital technology is changing the size of lumps or making slicing more feasible – from Airbnb to Crowd Cow, “An intermediary that enables people to buy much smaller shares of a particular farm’s bovines (and to select desired cuts of meat as well),” whereas few of us can fit a quarter of a cow in the freezer. Fennell suggests renaming the ‘sharing economy’ as the ‘slicing economy’. Technology is enabling both finer physical and time slicing.

All in all, a very intriguing book.

Slices and Lumps by Lee Anne Fennell

 


The gorilla problem

I was so keen to read Stuart Russell’s new book on AI, Human Compatible: AI and the Problem of Control, that I ordered it three times over. I’m not disappointed (although I returned two copies). It’s a very interesting and thoughtful book, and has some important implications for welfare economics – an area of the discipline in great need of being revisited, after years – decades – of a general lack of interest.

The book’s theme is how to engineer AI that can be guaranteed to serve human interests, rather than taking control and serving the specific interests programmed into its objective functions and rewards. The control problem is much-debated in the AI literature, in various forms. AI systems aim to achieve a specified objective given what they perceive from data inputs, including through sensors. How to control them is becoming an urgent challenge – as the book points out, by 2008 there were more objects than people connected to the internet, giving AI systems ever more extensive access to the real world, for both input and output. The potential of AI systems lies in their scope to communicate – machines can do better than any number n of humans because they access information that isn’t kept in n separate brains and communicated imperfectly between them. Humans have to spend a ton of time in meetings; machines don’t.

Russell argues that the AI community has been too slow to face up to the probability that machines as currently designed will gain control over humans – keep us at best as pets and at worst create a hostile environment for us, driving us slowly extinct, as we have gorillas (hence the gorilla problem). Some of the solutions proposed by those recognising the problem have been bizarre, such as ‘neural lace’ that permanently connects the human cortex to machines. As the book comments: “If humans need brain surgery merely to survive the threat posed by their own technology, perhaps we’ve made a mistake somewhere along the line.”

He proposes instead three principles to be adopted by the AI community:

  • the machine’s only objective is to maximise the realisation of human preferences
  • the machine is initially uncertain about what those preferences are
  • the ultimate source of information about human preferences is human behaviour.

He notes that AI systems embed uncertainty except in the objective they are set to maximise. The utility function and cost/reward/loss function are assumed to be perfectly known. This is an approach shared of course with economics. There is a great need to study planning and decision making with partial and uncertain information about preferences, Russell argues. There are also difficult social welfare questions. It’s one thing to think about an AI system deciding for an individual but what about groups? Utilitarianism has some well-known issues, much chewed over in social choice theory. But here we are asking AI systems to (implicitly) aggregate over individuals and make interpersonal comparisons. As I noted in my inaugural lecture, we’ve created AI systems that are homo economicus on steroids and it’s far from obvious this is a good idea. In a forthcoming paper with some of my favourite computer scientists, we look at the implications of the use of AI in public decisions for social choice and politics. The principles also require being able to teach machines (and ourselves) a lot about the links between human behaviour, the decision environment and underlying preferences. I’ll need to think about it some more, but these principles seem a good foundation for developing AI systems that serve human purposes in a fundamental way, rather than in a short-term instrumental one.
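
To make the contrast concrete, here is a toy sketch – mine, not Russell’s own formalism, and every action and number in it is invented – of the difference between a machine that maximises a hard-coded objective and one that treats the human’s preference weights as uncertain and defers to the human when it isn’t confident:

```python
# A hypothetical sketch, not Russell's method. Machine 1 assumes it knows the
# objective exactly; machine 2 keeps a distribution over the human's preference
# weights and asks rather than acts when no option is clearly preferred.
import numpy as np

rng = np.random.default_rng(0)

# Three candidate actions scored on two attributes the human cares about
# (say, speed and safety); the true trade-off is known only to the human.
actions = {"A": np.array([0.9, 0.2]),
           "B": np.array([0.6, 0.6]),
           "C": np.array([0.1, 0.9])}

def fixed_objective_machine():
    assumed = np.array([0.8, 0.2])        # weights baked in at design time - possibly wrong
    return max(actions, key=lambda a: actions[a] @ assumed)

def preference_uncertain_machine(samples=10_000, threshold=0.95):
    w = rng.dirichlet([1.0, 1.0], size=samples)            # prior over the human's weights
    scores = np.stack([w @ v for v in actions.values()], axis=1)
    votes = np.bincount(scores.argmax(axis=1), minlength=len(actions)) / samples
    best = int(np.argmax(votes))
    if votes[best] < threshold:                            # too unsure: defer to the human
        return f"ask the human (only {votes[best]:.0%} sure)"
    return list(actions)[best]

print(fixed_objective_machine())       # confidently picks an action, right or wrong
print(preference_uncertain_machine())  # asks, because no action is clearly preferred
```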

Anyway, Human Compatible is a terrific book. It doesn’t need any technical knowledge and indeed has appendices that are good explainers of some of the technical stuff.  I also like it that the book is rather optimistic, even about the geopolitical AI arms race: “Human-level AI is not a zero sum game and nothing is lost by sharing it. On the other hand, competing to be the first to achieve human level AI without first solving the control problem is a negative sum game. The payoff for everyone is minus infinity.” It’s clear that to solve the control problem cognitive and social scientists and the AI community need to start talking to each other a lot – and soon – if we’re to escape the fate gorillas suffered at our hands.


Human Compatible by Stuart Russell

 


Complexity across boundaries

Summer, with its leisurely hours for reading, seems a long time ago. I have managed Barbara Kingsolver’s excellent novel Unsheltered. And also, discovered down one of the by-ways of Twitter or the Internet, Worlds Hidden in Plain Sight, edited by David Krakauer. This is a collection of columns by researchers at the Santa Fe Institute dating back to its establishment in 1984.

SFI is of course the best-known centre for the study of complexity across disciplinary boundaries, and its work on economics is always at least intriguing and often far more. Like any collection of essays this is a mixed bag. One striking feature is how much more focused the later ones become on the social sciences and the humanities, compared to the earlier focus on biology and physics. I don’t know if this is an artefact of the selection or actually reflects the balance of work there, but it’s also obvious, I suppose, that human activity and society are inherently complex. A few of the essays – although intended for a not-entirely-specialist audience – are utterly incomprehensible to someone who isn’t a disciplinary expert. I suspect one or two were also incomprehensible to their authors. And the most recent batch consists of some disappointingly banal essays for a newspaper.

In between, however, I found much food for thought – particularly on the lessons at disciplinary borders: information science, economics, anthropology, biology…. As one 2011 essay by Krakauer puts it, we have to recognize that “many of our most pressing problems and interesting challenges reside at the boundaries of existing disciplines and require the development of an entirely new kind of sensibility that remains ‘disciplined’ by careful empirical experiment, observation and analysis.”

What else? There’s a brilliant 2009 essay by Ole Peters, entirely justifying the cost of the book, explaining in a few pages with great clarity why ignoring the non-ergodicity of economic and financial variables has led to catastrophic policy errors. A 2012 Robert May essay alerted me to a calculation by Ben Friedman (my thesis adviser) that I hadn’t spotted before: that before the crisis running the US financial system "took one third of all profits earned on investment capital," up from 10% three decades earlier.
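
The flavour of Peters’s argument can be conveyed with a toy simulation (my own, not taken from the essay): a 50/50 gamble that multiplies wealth by 1.5 or 0.6 has a positive expected return each round, yet the typical individual who plays it repeatedly loses most of their stake, because the ensemble average and the time average part company:

```python
# A toy illustration of non-ergodicity: the average across people grows while
# the typical individual trajectory shrinks.
import numpy as np

rng = np.random.default_rng(1)
people, rounds = 100_000, 50

# Expected multiplier per round: 0.5*1.5 + 0.5*0.6 = 1.05   (ensemble average grows)
# Time-average growth rate:      sqrt(1.5 * 0.6)   ~ 0.95   (a typical run shrinks)
multipliers = rng.choice([1.5, 0.6], size=(people, rounds))
wealth = multipliers.prod(axis=1)   # each person's wealth after 50 rounds, starting from 1

print("mean wealth across people:", wealth.mean())       # well above 1, pulled up by a few lucky runs
print("median individual wealth :", np.median(wealth))   # a small fraction of the original stake
print("share who ended up ahead :", (wealth > 1).mean())
```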

And I’ve been thinking about information – the way ideas and imagination, not mechanical or physical constraints, limit social progress – and puzzling over the role of information in energy use and social complexity, a running thread through the collection. One day I’m going to have to get my head properly around information theory.

Like all collections, this book has the merit of being easy to dip into and read in chunks. It’s a great overview of the work of SFI, one of the most interesting research centres anywhere. More power to their elbow.

Worlds Hidden in Plain Sight

 
