Good economics and expertise

“A recurring theme of this book is that it is unreasonable to expect markets to always deliver outcomes that are just, acceptable, or even efficient.” So write Abhijit Banerjee and Esther Duflo at the start of one chapter of their new book – fabulously timed to coincide with the Nobel they just shared with Michael Kremer – Good Economics for Hard Times: Better Answers to Our Biggest Problems. I really enjoyed reading the book but it left me with one nagging question, of which more later.

The Introduction is titled ‘MEGA: Make Economics Great Again’, and sets out the framing of the rest of the book, which is to apply the methods of economic analysis to a range of big problems. Thus there are chapters on immigration and trade, technology, climate change, preferences and social choice, and the welfare state. The model for each chapter is a beautifully clear and quite balanced summary of the frontier of economic research, and therefore of the conclusions economic expertise can justify when it comes to these major challenges. This makes it, by the way, an excellent teaching resource. There is relatively little about either the RCT methodology or behavioural economics, for which the duo are famous, although these of course feature. Much of the whole economic toolkit is covered.

For example, the chapter on immigration and its impact on the host country and its workers is a model of clarity in setting out why migrants may be complements rather than substitutes for native-born workers, and what choices profit-maximizing employers might make. It gives the example of the 1964 expulsion of Mexican workers from California, on the grounds that they depressed local wages. “Their exit did nothing for the natives: wages and employment did not go up.” The reason was that farmers mechanized instead: adoption of tomato harvesting machines rose from zero in 1964 to 100% in 1967. The farmers also switched to crops for which harvesting machines were available, and for some time stopped growing others such as lettuce and strawberries.
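
To make the substitution logic concrete, here is a minimal sketch in Python, with purely hypothetical numbers (not figures from the book or from the Californian case): a profit-maximizing grower simply compares the cost of hand harvesting with the cost of a machine, so once the low-wage labour pool disappears, mechanization rather than higher wages is the cheaper response.

```python
# Hypothetical illustration of the substitution argument: a grower picks the
# cheaper harvesting technology, so removing a pool of low-wage labour can
# trigger mechanization rather than higher wages for the remaining workers.

def cheapest_harvest(acres, wage, hours_per_acre, machine_cost_per_acre):
    """Compare hand harvesting (wage bill) with machine harvesting (per-acre cost)."""
    hand_cost = acres * wage * hours_per_acre
    machine_cost = acres * machine_cost_per_acre
    return ("machine", machine_cost) if machine_cost < hand_cost else ("hand", hand_cost)

ACRES, HOURS_PER_ACRE, MACHINE_COST_PER_ACRE = 100, 40, 30  # made-up parameters

# With access to low-wage seasonal labour, hand harvesting is cheaper.
print(cheapest_harvest(ACRES, wage=0.50, hours_per_acre=HOURS_PER_ACRE,
                       machine_cost_per_acre=MACHINE_COST_PER_ACRE))
# -> ('hand', 2000.0)

# Once that labour is gone and only higher-wage workers remain, the grower
# mechanizes instead of bidding up wages for native-born workers.
print(cheapest_harvest(ACRES, wage=1.50, hours_per_acre=HOURS_PER_ACRE,
                       machine_cost_per_acre=MACHINE_COST_PER_ACRE))
# -> ('machine', 3000)
```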

The chapter on the likely impact of technology on jobs is similarly clear, and yet also notes that there is scant consensus among economists on this. I thought this was the weakest chapter, but perhaps that’s because it’s my area and they try to cover a vast literature – taking in competition policy, inequality due to technology, positional goods, and the nature of economic growth… One could expend thousands of words on these. 🙂

The book is a bestseller and deservedly so. My big reservation about it is the way this demonstration of the analytical power of economic expertise lands in these ‘we’ve had enough of experts’ times. This is as much a matter of tone as content, but I found statements like this actually a bit uncomfortable: “This underscores the urgent need to set ideology aside and advocate for things most economists agree on, based on the recent research.” I have come to believe this deep urge among economists to separate the positive and the normative – dating back to Lionel Robbins – is a mistake. Dani Rodrik’s Economics Rules, another excellent overview of ‘good economics’, is much more nuanced and less economics-imperialistic in this respect.

Economic analysis rocks, as does evidence, but we have to engage with the ideology too. Good economics is about more than technical expertise.


Thinking about AI

There are several good introductions to AI; the three I’ve read complement each other well. Hannah Fry’s Hello World is probably the best place for a complete beginner to start. As I said in reviewing it, it’s a very balanced introduction. Another is Pedro Domingos’s The Master Algorithm, which is more about how machine learning systems work, with a historical perspective covering different approaches, and a hypothesis that in the end they will merge into one unified approach. I liked it too, but it’s a denser read.

Now I’ve read a third, Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. It gives a somewhat different perspective by describing wonderfully clearly how different AI applications actually work, and hence helps the reader understand their strengths and limitations. I would say these are the most illuminating simple yet meaningful explanations I’ve read of – for example – reinforcement learning, convolutional neural networks, and word vectors. I wish I’d had this book when I first started reading some of the AI literature.
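
As a flavour of the kind of idea Mitchell unpacks, here is a minimal sketch of word vectors (my illustration, not drawn from the book, using tiny hand-made vectors rather than anything learned from data): words become points in a vector space, and similarity between words is measured by the cosine of the angle between them.

```python
import math

# Toy word vectors. In real systems these are learned from large text corpora;
# the three-dimensional vectors here are hand-made purely to illustrate the idea.
vectors = {
    "king":  [0.8, 0.7, 0.1],
    "queen": [0.7, 0.8, 0.2],
    "apple": [0.1, 0.1, 0.9],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; closer to 1.0 means more similar."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

print(cosine_similarity(vectors["king"], vectors["queen"]))  # high: related words
print(cosine_similarity(vectors["king"], vectors["apple"]))  # much lower
```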

One thing that jumps out from the crystal clear explanations is how dependent machine learning systems are on humans – from the many who spend hours tagging images to the super-skilled ‘alchemists’ who are able to build and tune sophisticated applications: “Often it takes a kind of cabbalistic knowledge that students of machine learning gain both from their apprenticeships and from hard-won experience.”

The book starts with AI history and background, then covers image recognition and similar applications. It moves on to issues of ethics and trust, and then natural language processing and translation. The final section addresses the questions of whether artificial general intelligence will ever be possible, and how AI relates to knowledge and to consciousness. These are open questions, though I lean toward the view – as Mitchell does – that there is something important about embodiment for understanding in the sense that we humans mean it. Mitchell argues that deep learning is currently hitting a ‘barrier of meaning’, while being superb at narrowly defined tasks of a certain kind. “Only the right kind of machine – one that is embodied and active in the world – would have human level intelligence in its reach. … after grappling with AI for many years, I am finding the embodiment hypothesis increasingly compelling.”

The book then ends with brief reflections on a series of questions – when will self-driving cars be common, will robots take all the jobs, what are the big problems left to solve in AI.

Together, the three books complement each other and stand as an excellent introduction to AI, cutting through both the hype and the scary myths, explaining what the term covers and how the different approaches work, and raising some key questions we will all need to be thinking about in the years ahead. The AI community is well served by these thoughtful communicators. A newcomer to the literature could read these three and be more than well-enough informed; my recommended order would be Fry, Mitchell, Domingos. Others may know of better books of course – if so, please do comment & I’ll add an update.

UPDATE

The good folks on Twitter have recommended the following:

Human Compatible by Stuart Russell (I briefly forgot how much I enjoyed this.)

Rebooting AI by Gary Marcus

Parallel Distributed Processing by David Rumelhart & others (looks technical…)

The Creativity Code by Marcus du Sautoy

Machine, Platform, Crowd by Andrew McAfee & Erik Brynjolfsson (also reviewed on this blog)

Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (a key textbook)


Configuring the lumpy economy

Somebody at the University of Chicago Press has noticed how my mind works, and sent me Slices and Lumps: Division + Aggregation by Lee Anne Fennell. It’s about the implications of the reality that economic resources are, well, lumpy and variably slice-able. The book starts with the concept of configuration: how to divide up goods that exist in lumps to satisfy various claims on them, and how to put together ones that are separate to satisfy needs and demands. The interaction between law (especially property rights) and economics is obvious – the author is a law professor. So is the immediate implication that marginal analysis is not always useful.

This framing in terms of configuration allows the book to range widely over various economic problems. About two thirds of it consists of chapters looking at the issues of configuration in specific contexts such as financial decisions, urban planning, and housing decisions. The latter, for example, encompass some physical lumpiness or indivisibilities and some legal or regulatory ones. Airbnb – where allowed – enables transactions over the excess capacity due to lumpiness, as homeowners can sell temporary use rights.

The book is topped and tailed by some general reflections on lumping and slicing. The challenges are symmetric. The commons is a tragedy because too many people can access resources (slicing is too easy), whereas the anticommons is a tragedy because too many people can block the use of resources. Examples of the latter include the redevelopment of a brownfield site where too many owners must all agree to sell their land, and also patent thickets. Property rights can be both too fragmented and not fragmented enough. There are many examples of the way policy can shape choice sets by making them more or less chunky – changing tick sizes in financial markets, but also unbundling albums so people can stream individual songs. Fennell writes, “At the very least, the significance of resource segmentation and choice construction should be taken into account in thinking innovatively about how to address externalities.” Similarly, when it comes to personal choices, we can shape those by altering the units of choice – some are more or less binary (failing a test by a tiny margin is as bad as failing by a larger one), while others involve smaller steps (writing a few paragraphs of a paper).

Woven through the book, too, are examples of how digital technology is changing the size of lumps or making slicing more feasible – from Airbnb to Crowd Cow, “an intermediary that enables people to buy much smaller shares of a particular farm’s bovines (and to select desired cuts of meat as well),” whereas few of us can fit a quarter of a cow in the freezer. Fennell suggests renaming the ‘sharing economy’ the ‘slicing economy’. Technology is enabling both finer physical slicing and finer time slicing.

All in all, a very intriguing book.

Slices and Lumps by Lee Anne Fennell

 


The gorilla problem

I was so keen to read Stuart Russell’s new book on AI, Human Compatible: AI and the Problem of Control, that I ordered it three times over. I’m not disappointed (although I returned two copies). It’s a very interesting and thoughtful book, and has some important implications for welfare economics – an area of the discipline in great need of being revisited, after years – decades – of a general lack of interest.

The book’s theme is how to engineer AI that can be guaranteed to serve human interests, rather than taking control and serving the specific interests programmed into its objective functions and rewards. The control problem is much debated in the AI literature, in various forms. AI systems aim to achieve a specified objective given what they perceive from data inputs, including through sensors. How to control them is becoming an urgent challenge – as the book points out, by 2008 there were more objects than people connected to the internet, giving AI systems ever more extensive input from, and output to, the real world. Part of AI’s potential lies in machines’ scope to communicate: machines can do better than any number n of humans because their information isn’t kept in n separate brains and communicated imperfectly between them. Humans have to spend a ton of time in meetings; machines don’t.

Russell argues that the AI community has been too slow to face up to the probability that machines as currently designed will gain control over humans – keep us at best as pets and at worst create a hostile environment for us, driving us slowly extinct, as we have gorillas (hence the gorilla problem). Some of the solutions proposed by those recognising the problem have been bizarre, such as ‘neural lace’ that permanently connects the human cortex to machines. As the book comments: “If humans need brain surgery merely to survive the threat posed by their own technology, perhaps we’ve made a mistake somewhere along the line.”

He proposes instead three principles to be adopted by the AI community:

  • the machine’s only objective is to maximise the realisation of human preferences
  • the machine is initially uncertain about what those preferences are
  • the ultimate source of information about human preferences is human behaviour.

He notes that AI systems embed uncertainty except in the objective they are set to maximise: the utility function and the cost/reward/loss function are assumed to be perfectly known. This is an approach shared, of course, with economics. There is a great need, Russell argues, to study planning and decision making with partial and uncertain information about preferences. There are also difficult social welfare questions. It’s one thing to think about an AI system deciding for an individual, but what about groups? Utilitarianism has some well-known issues, much chewed over in social choice theory. But here we are asking AI systems to (implicitly) aggregate over individuals and make interpersonal comparisons. As I noted in my inaugural lecture, we’ve created AI systems that are homo economicus on steroids, and it’s far from obvious this is a good idea. In a forthcoming paper with some of my favourite computer scientists, we look at the implications of the use of AI in public decisions for social choice and politics. The principles also require being able to teach machines (and ourselves) a lot about the links between human behaviour, the decision environment and underlying preferences. I’ll need to think about it some more, but these principles seem a good foundation for developing AI systems that serve human purposes in a fundamental way, rather than in a short-term instrumental one.
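
As a minimal sketch of the idea behind the three principles (my illustration, not Russell’s own algorithm), imagine a machine that starts uncertain over two candidate versions of a human’s preferences and updates that uncertainty, Bayesian-style, as it observes which options the human actually picks. The option names and utilities below are entirely hypothetical.

```python
import math

# Minimal sketch of the idea behind the three principles (not Russell's own
# algorithm): the machine is uncertain which reward function the human has,
# and treats observed human choices as evidence for a Bayesian update.

# Candidate hypotheses about the human's preferences: option -> utility.
hypotheses = {
    "prefers_speed":  {"fast_risky": 1.0, "slow_safe": 0.2},
    "prefers_safety": {"fast_risky": 0.1, "slow_safe": 1.0},
}

# Principle 2: start uncertain about which hypothesis is right.
belief = {name: 0.5 for name in hypotheses}

def update_belief(belief, observed_choice, options):
    """Hypotheses under which the observed choice looks relatively attractive
    (softmax likelihood) gain probability mass."""
    posterior = {}
    for name, utils in hypotheses.items():
        z = sum(math.exp(utils[o]) for o in options)
        likelihood = math.exp(utils[observed_choice]) / z
        posterior[name] = belief[name] * likelihood
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# Principle 3: human behaviour is the evidence. The human keeps picking safety.
for _ in range(3):
    belief = update_belief(belief, "slow_safe", ["fast_risky", "slow_safe"])

print(belief)  # probability mass shifts toward "prefers_safety"
```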

Anyway, Human Compatible is a terrific book. It doesn’t need any technical knowledge, and indeed has appendices that are good explainers of some of the technical stuff. I also like that the book is rather optimistic, even about the geopolitical AI arms race: “Human-level AI is not a zero sum game and nothing is lost by sharing it. On the other hand, competing to be the first to achieve human level AI without first solving the control problem is a negative sum game. The payoff for everyone is minus infinity.” It’s clear that, to solve the control problem, cognitive and social scientists and the AI community need to start talking to each other a lot – and soon – if we’re to escape the fate the gorillas suffered at our hands.


Human Compatible by Stuart Russell

 


Productivity machines

I thought Corinna Schlombs’ book Productivity Machines: German Appropriations of American Technology from Mass Production to Computer Automation looked like my kind of book, and I was right. It is a history-of-technology approach to the concept of productivity as it developed in the 20th century, in the frontier economy – the US – and as it transferred to Europe, especially Germany, and especially through Marshall Plan aid after the Second World War.

The book starts with the early steps taken by the Bureau of Labor Statistics to define productivity and measure it, work that ran alongside its other key activity of defining and measuring price indices. This early definition was clearly focused on technology adoption and use, although with an awareness that contextual factors (closeness to materials, for example) would affect the measured production efficiency of bricks or shoes. The earliest work involved narrative accounts of production methods, with surveys and the development of an index coming later.
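
For readers who have never seen one, a labour productivity index of this kind is simply output per unit of labour input, normalised to a base period. Here is a minimal sketch with hypothetical figures (not data from the book or from the BLS):

```python
# Minimal sketch of a labour productivity index: output per hour worked,
# expressed relative to a base year. All figures are hypothetical, purely
# to illustrate the calculation, not data from the book.

output = {1920: 1000, 1925: 1300, 1930: 1500}   # units produced (say, bricks)
hours  = {1920: 2000, 1925: 2200, 1930: 2100}   # labour hours worked

BASE_YEAR = 1920
base_productivity = output[BASE_YEAR] / hours[BASE_YEAR]

index = {year: round(100 * (output[year] / hours[year]) / base_productivity, 1)
         for year in output}

print(index)  # {1920: 100.0, 1925: 118.2, 1930: 142.9}
```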

In the same period, Henry Ford started to reshape the context for thinking about productivity with his famous $5 wage, enabling the workers who made cars to think about affording to buy them. This paved the way for a wider understanding of productivity in the US as involving the whole production system, including methods of management and – by the postwar era – the American way of life, in which relatively egalitarian social relations and consumer affluence created demand and growing markets.

The 1920s saw many German visitors touring US plants to understand productivity, and they took away widely varying messages. Many industrialists concluded that the technology and organisation of the factory were the central issue, others argued that workers should work harder and longer, while some – particularly unions – did appreciate the Ford message. Postwar, however, the influence of the US through its aid to Germany meant the introduction of a programme of visits to the US by German workers and industrialists, intended to land the message about the importance of the social framework. The distinction between US and German approaches to productivity subsequently centred on the role of unions – local bargaining in the US, national collective bargaining for the whole industry in Germany.

The book ends with chapters on IBM (whose welfare capitalism fit well into the German context) and on US computer technology driving automation in the 1960s. Automation was seen differently on the two sides of the Atlantic – as a driver of productivity in the US, and as a threat to employment and an eater of scarce capital in Germany. By 1971 IBM was dominant in computer markets across Europe, except, at that stage, the UK.

The bottom line I take from the book is that ‘productivity’ is a socially embedded construct whose meaning has changed over time – something to bear in mind as we fret now about low productivity. Schlombs ends by asking whether its meaning will change again now – whether ‘productivity’ will come to be driven by quality improvements, even quality-of-life improvements, rather than by either working ever harder or consuming ever more. “Productivity Machines has shown that American expectations of economic growth and higher standards of living through increased productivity from technological improvements were unique to the American model and not inherent in capitalist economic relations.”

So, a very interesting read and a useful contribution to the inter-economist debate raging about the OECD productivity slowdown.

