Good economics and expertise

“A recurring theme of this book is that it is unreasonable to expect markets to always deliver outcomes that are just, acceptable, or even efficient.” So write Abhijit Banerjee and Esther Duflo at the start of one chapter of their new book – fabulously timed to coincide with the Nobel they just shared with Michael Kremer – Good Economics for Hard Times: Better Answers to Our Biggest Problems. I really enjoyed reading the book but it left me with one nagging question, of which more later.

The Introduction is titled ‘MEGA: Make Economics Great Again’, and sets out the framing of the rest of the book, which is to apply the methods of economic analysis to a range of big problems. Thus there are chapters on immigration and trade, technology, climate change, preferences and social choice, and the welfare state. Each chapter offers a beautifully clear and quite balanced summary of the frontier of economic research, and therefore of the conclusions economic expertise can justify when it comes to these major challenges. This makes it, by the way, an excellent teaching resource. There is relatively little about either the RCT methodology or the behavioural economics for which the duo are famous, although these of course feature. Much of the economic toolkit is covered.

For example, the chapter on immigration and its impact on the host country and its workers is a model of clarity in setting out why migrants may be complements rather than substitutes for native-born workers, and what choices profit-maximizing employers might make. It gives the example of the 1964 expulsion of Mexican workers from California, on the grounds that they depressed local wages. “Their exit did nothing for the natives: wages and employment did not go up.” The reason was that farmers mechanized instead: adoption of tomato-harvesting machines rose from zero in 1964 to 100% in 1967. Farmers also switched to crops for which harvesting machines were available, and for some time stopped growing others such as lettuce and strawberries.

The chapter on the likely impact of technology on jobs is similarly clear, yet it also notes that there is scant consensus among economists on this. I thought this was the weakest chapter, but perhaps that’s because it’s my area and they try to cover a vast literature – taking in competition policy, technology-driven inequality, positional goods, and the nature of economic growth… One could expend thousands of words on these. 🙂

The book is a bestseller and deservedly so. My big reservation is the way this demonstration of the analytical power of economic expertise lands in these ‘we’ve had enough of experts’ times. This is as much a matter of tone as content, but I found statements like this actually a bit uncomfortable: “This underscores the urgent need to set ideology aside and advocate for things most economists agree on, based on the recent research.” I have come to believe this deep urge among economists to separate the positive and the normative – dating back to Lionel Robbins – is a mistake. Dani Rodrik’s Economics Rules, another excellent overview of ‘good economics’, is much more nuanced and less economics-imperialistic in this respect.

Economic analysis rocks, as does evidence, but we have to engage with the ideology too. Good economics is about more than technical expertise.

Thinking about AI

There are several good introductions to AI; the three I’ve read complement each other well. Hannah Fry’s Hello World is probably the best place for a complete beginner to start. As I said in reviewing it, it’s a very balanced introduction. Another is Pedro Domingos’s The Master Algorithm, which is more about how machine learning systems work, with a historical perspective covering different approaches, and a hypothesis that in the end they will merge into one unified approach. I liked it too, but it’s a denser read.

Now I’ve read a third, Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. It gives a somewhat different perspective by describing wonderfully clearly how different AI applications actually work, and hence helps the reader understand their strengths and limitations. I would say these are the most illuminating simple yet meaningful explanations I’ve read of – for example – reinforcement learning, convolutional neural networks, and word vectors. I wish I’d had this book when I first started reading some of the AI literature.

One thing that jumps out from the crystal clear explanations is how dependent machine learning systems are on humans – from the many who spend hours tagging images to the super-skilled ‘alchemists’ who are able to build and tune sophisticated applications: “Often it takes a kind of cabbalistic knowledge that students of machine learning gain both from their apprenticeships and from hard-won experience.”

The book starts with AI history and background, then covers image recognition and similar applications. It moves on to issues of ethics and trust, and then to natural language processing and translation. The final section addresses whether artificial general intelligence will ever be possible, and how AI relates to knowledge and to consciousness. These are open questions, though I lean toward the view – as Mitchell does – that there is something important about embodiment for understanding in the sense that we humans mean it. Mitchell argues that deep learning is currently hitting a ‘barrier of meaning’, while being superb at narrowly defined tasks of a certain kind. “Only the right kind of machine – one that is embodied and active in the world – would have human level intelligence in its reach. … after grappling with AI for many years, I am finding the embodiment hypothesis increasingly compelling.”

The book then ends with brief reflections on a series of questions – when will self-driving cars be common, will robots take all the jobs, what are the big problems left to solve in AI.

Together, the three books stand as an excellent introduction to AI, cutting through both the hype and the scary myths, explaining what the term covers and how the different approaches work, and raising some key questions we will all need to be thinking about in the years ahead. The AI community is well served by these thoughtful communicators. A newcomer to the literature could read these three and be more than well-enough informed; my recommended order would be Fry, Mitchell, Domingos. Others may know of better books of course – if so, please do comment & I’ll add an update.

UPDATE

The good folks on Twitter have recommended the following:

Human Compatible by Stuart Russell (I’d briefly forgotten how much I enjoyed this.)

Rebooting AI by Gary Marcus

Parallel Distributed Processing by David Rumelhart & others (looks technical…)

The Creativity Code by Marcus du Sautoy

Machine, Platform, Crowd by Andrew McAfee and Erik Brynjolfsson (also reviewed on this blog)

Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (a key textbook)

Configuring the lumpy economy

Somebody at the University of Chicago Press has noticed how my mind works, and sent me Slices and Lumps: Division + Aggregation by Lee Anne Fennell. It’s about the implications of the reality that economic resources are, well, lumpy and variably slice-able. The book starts with the concept of configuration: how to divide up goods that exist in lumps to satisfy various claims on them, and how to put together ones that are separate to satisfy needs and demands. The interaction between law (especially property rights) and economics is obvious – the author is a law professor. So is the immediate implication that marginal analysis is not always useful.

This framing in terms of configuration allows the book to range widely over various economic problems. About two thirds of it consists of chapters looking at the issues of configuration in specific contexts such as financial decisions, urban planning, and housing. The latter, for example, encompasses some physical lumpiness or indivisibilities and some legal or regulatory ones. Airbnb – where allowed – enables transactions over the excess capacity created by lumpiness, as homeowners can sell temporary use rights.

The book is topped and tailed by some general reflections on lumping and slicing. The challenges are symmetric. The commons is a tragedy because too many people can access resources (slicing is too easy), whereas the anti-commons is a tragedy because too many people can block the use of resources. Examples of the latter include the redevelopment of a brownfield site, where too many owners must agree to sell their land, and patent thickets. Property rights can be both too fragmented and not fragmented enough. There are many examples of the way policy can shape choice sets by making them more or less chunky – changing tick sizes in financial markets, but also unbundling albums so people can stream individual songs. Fennell writes, “At the very least, the significance of resource segmentation and choice construction should be taken into account in thinking innovatively about how to address externalities.” Similarly, when it comes to personal choices, we can shape them by altering the units of choice – some are more or less binary (failing a test by a tiny margin is as bad as failing by a larger one), others involve smaller steps (writing a few paragraphs of a paper).

Woven through the book, too, are examples of how digital technology is changing the size of lumps or making slicing more feasible – from Airbnb to Crowd Cow, “An intermediary that enables people to buy much smaller shares of a particular farm’s bovines (and to select desired cuts of meat as well),” whereas few of us can fit a quarter of a cow in the freezer. Fennell suggests renaming the ‘sharing economy’ as the ‘slicing economy’. Technology is enabling both finer physical and finer time slicing.

All in all, a very intriguing book.