Goliaths and populists

Matt Stoller’s Goliath: The 100-Year War Between Monopoly Power and Democracy ended up being a different book from the one I’d expected: I thought it was going to be about (lack of) competition in digital markets, but in fact it’s a broader political-economy history of 20th century America, told in terms of the ebb and flow of the power of big business. Anti-trust policy is just one part of the story.

It’s an interesting story, one of the steady accumulation of economic and political power in a few hands, interrupted periodically by a decisive political – often populist – reset in favour of the small farmer or business (occasionally even the consumer). It’s also a story told from an anti-big business perspective, based on an analysis of the corruption of political power by accumulated money – which is fine, but it does make for a binary tale of heroes and villains. These include one hero I hadn’t heard of before, Representative Wright Patman, a long-serving congressman who stood on the side of the little people over the decades.

And some surprising villains. Chief among them is J.K. Galbraith, for his love of technocracy and big enterprises – after all, it is easier to direct an economy of big firms than of little ones, and he was responsible for implementing price controls during the war. Galbraith wrote: “There must be some element of monopoly in an industry if it is to be progressive.” He saw small firms and farmers as inherently conservative. Other public intellectuals such as Richard Hofstadter shared the view of small business as reactionary and inefficient, so the liberals became the corporatists – in contrast to the 1920s and 30s, when those on the left (to be anachronistic about it) were standing up for the small guy. Thus, “Politics was no longer an avenue for structuring society but rather a means of ratifying what technologically driven organizations already saw as an optimal arrangement,” Stoller writes of the revival of corporatism in the 60s and 70s.

There are unsurprising villains too, of course, from Carnegie, Morgan and Mellon to Citibank’s Walter Wriston, who revived the power of the financial sector by successfully unravelling the financial regulation that had constrained the big banks since the Depression. Indeed, my big takeaway from the book was the need to constrain big finance, far more than big tech or big anything else.

One of the interesting things about the long-run perspective is the way it makes clear the pendulum in policy, from populist anti-big business to pro-big business phases in either oligarchic or corporatist modes. As a history of the US economy through this lens, Goliath is very stimulating. It is very US-centric, however – and this focus carries over into today’s debate about anti-trust or indeed other areas of economic policy. America matters, particularly in digital markets for obvious reasons, but it is also highly distinctive.

Trying to think through the extent to which the small guy/big business pendulum carries over to the UK or elsewhere in Europe, I concluded that it does to a degree. We had conglomeration in the 60s and 70s too. But in recent times European anti-trust practice has differed greatly from the US – I’m open to correction, as this is a question of institutional history, but I don’t think European competition authorities ever went the full Chicago School. (Just as inequality has increased everywhere but only the US is truly back to the Gilded Age.)

All this is by way of saying Goliath is well worth a read, bearing in mind its firmly – proudly – slanted perspective. It has all kinds of detail I didn’t know. I found its contrarian view about Galbraith, aligning him with Peter Drucker for example, absolutely fascinating & am still mulling over the relationship between technocracy and scale.


Weightlessness Redux

The weeks are flying past. I’ve read recently an array of non-economics books (Owen Hatherley’s Trans-Europe Express, Susan Orlean’s The Library Book, Francis Spufford’s True Stories, a couple of the re-issued Maigrets) and also Matt Stoller’s Goliath and Andrew McAfee’s More from Less. I’ll review Goliath in my next post.

More from Less: the surprising story of how we learned to prosper using fewer resources – and what happens next (to give it the full overly wordy subtitle) is presumably aimed at the airport bookshop market. It’s written in a very accessible way and it summarises a lot of interesting research – although not McAfee’s own.

The main point is that the material resource intensity of economic growth has been declining in the rich western economies. This is set in an account of the origins of modern capitalism in the Industrial Revolution – emphasising the importance of ideas and of contestability through markets – and the urgent debate about the trade-off between humans getting better off (escaping the Malthusian trap) and the damaging environmental impact of growth. The dematerialisation of the economy is helping improve the terms of that trade-off.

A major difficulty I have in reviewing this is that, although it’s an enjoyable read, I wrote a book making the same point in 1997, The Weightless World (out of print, free pdf here). There’s no reason at all McAfee should have read it, as I was a nobody and it was a long time ago. But it does mean that (perhaps uniquely) I can’t find anything that’s new in More From Less. The research he cites concerning dematerialisation dates from 2012 (Chris Goodall) and 2015 (Jesse Ausubel, The Return of Nature) – so this is another example of a phenomenon being discovered twice, because there was similar work in the mid-1990s on material flow accounts, on which I based my book. Alan Greenspan even made a speech about it in 1996. It’s a noteworthy phenomenon, so I hope McAfee does alert new readers to it. He puts far more emphasis on environmental challenges than I did back in the more innocent 1990s; my focus was more on the socio-economic consequences of a dematerializing economy.

However, the weightlessness or dematerialization phenomenon doesn’t deliver a knockout blow to the degrowth argument that a reduced but still positive material intensity of growth is not enough. Tim Jackson is the most thoughtful advocate of this argument – see this recent essay in Science. It may be that we need to find a way to tread more lightly on the planet in absolute terms as well as relative ones, although I’ll welcome weightless growth as better than the weighty alternative. And – as even no growth is politically divisive, never mind degrowth – the issues raised in More From Less are difficult and important ones.


Good economics and expertise

“A recurring theme of this book is that it is unreasonable to expect markets to always deliver outcomes that are just, acceptable, or even efficient.” So write Abhijit Banerjee and Esther Duflo at the start of one chapter of their new book – fabulously timed to coincide with the Nobel they just shared with Michael Kremer – Good Economics for Hard Times: Better Answers to Our Biggest Problems. I really enjoyed reading the book but it left me with one nagging question, of which more later.

The Introduction is titled ‘MEGA: Make Economics Great Again’, and sets out the framing of the rest of the book, which is to apply the methods of economic analysis to a range of big problems. Thus there are chapters on immigration and trade, technology, climate change, preferences and social choice, and the welfare state. The model for each chapter is a beautifully clear and quite balanced summary of the frontier of economic research, and therefore of the conclusions economic expertise can justify when it comes to these major challenges. This makes it, by the way, an excellent teaching resource. There is relatively little about either the RCT methodology or the behavioural economics for which the duo are famous, although these of course feature. Much of the whole economic toolkit is covered.

For example, the chapter on immigration and its impact on the host country and its workers is a model of clarity in setting out why migrants may be complements rather than substitutes for native-born workers, and what choices profit-maximizing employers might make. It gives the example of the 1964 expulsion of Mexican workers from California, on the grounds that they depressed local wages. “Their exit did nothing for the natives: wages and employment did not go up.” The reason was that farmers mechanized instead: adoption of tomato harvesting machines rose from zero in 1964 to 100% in 1967. The farmers also switched to crops for which harvesting machines were available, and for some time stopped growing others such as lettuce and strawberries.

The chapter on the likely impact of technology on jobs is similarly clear, and yet also notes that there is scant consensus among economists on this. I thought this was the weakest chapter, but perhaps that’s because it’s my area and they try to cover a vast literature – taking in competition policy, inequality due to technology, positional goods, the nature of economic growth… One could expend thousands of words on these. 🙂

The book is a bestseller and deservedly so. My big reservation about it is the way this demonstration of the analytical power of economic expertise lands in these ‘we’ve had enough of experts’ times. This is as much a matter of tone as content, but I found statements like this actually a bit uncomfortable: “This underscores the urgent need to set ideology aside and advocate for things most economists agree on, based on the recent research.” I have come to believe this deep urge among economists to separate the positive and the normative – dating back to Lionel Robbins – is a mistake. Dani Rodrik’s Economics Rules, another excellent overview of ‘good economics’, is much more nuanced and less economics-imperialistic in this respect.

Economic analysis rocks, as does evidence, but we have to engage with the ideology too. Good economics is about more than technical expertise.

Thinking about AI

There are several good introductions to AI; the three I’ve read complement each other well. Hannah Fry’s Hello World is probably the best place for a complete beginner to start. As I said in reviewing it, it’s a very balanced introduction. Another is Pedro Domingos’s The Master Algorithm, which is more about how machine learning systems work, with a historical perspective covering the different approaches, and a hypothesis that in the end they will merge into one unified approach. I liked it too, but it’s a denser read.

Now I’ve read a third, Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. It gives a somewhat different perspective by describing wonderfully clearly how different AI applications actually work, and hence helps the reader understand their strengths and limitations. I would say these are the most illuminating simple yet meaningful explanations I’ve read of – for example – reinforcement learning, convolutional neural networks, and word vectors. I wish I’d had this book when I first started reading some of the AI literature.

One thing that jumps out from the crystal clear explanations is how dependent machine learning systems are on humans – from the many who spend hours tagging images to the super-skilled ‘alchemists’ who are able to build and tune sophisticated applications: “Often it takes a kind of cabbalistic knowledge that students of machine learning gain both from their apprenticeships and from hard-won experience.”

The book starts with AI history and background, then covers image recognition and similar applications. It moves on to issues of ethics and trust, and then to natural language processing and translation. The final section addresses the question of whether artificial general intelligence will ever be possible, and how AI relates to knowledge and to consciousness. These are open questions, though I lean toward the view – as Mitchell does – that there is something important about embodiment for understanding in the sense that we humans mean it. Mitchell argues that deep learning is currently hitting a ‘barrier of meaning’, while being superb at narrowly defined tasks of a certain kind. “Only the right kind of machine – one that is embodied and active in the world – would have human level intelligence in its reach. … after grappling with AI for many years, I am finding the embodiment hypothesis increasingly compelling.”

The book then ends with brief reflections on a series of questions – when will self-driving cars be common, will robots take all the jobs, what are the big problems left to solve in AI.

Together, the three books complement each other and stand as an excellent introduction to AI, cutting through both the hype and the scary myths, explaining what the term covers and how the different approaches work, and raising some key questions we will all need to be thinking about in the years ahead. The AI community is well served by these thoughtful communicators. A newcomer to the literature could read these three and be more than well-enough informed; my recommended order would be Fry, Mitchell, Domingos. Others may know of better books of course – if so, please do comment & I’ll add an update.

UPDATE

The good folks on Twitter have recommended the following:

Human Compatible by Stuart Russell (I briefly forgot how much I enjoyed this.)

Rebooting AI by Gary Marcus

Parallel Distributed Processing by David Rumelhart & others (looks technical…)

The Creativity Code by Marcus du Sautoy

Machine, Platform, Crowd by Andrew McAfee & Erik Brynjolfsson (also reviewed on this blog)

Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (a key textbook)

Configuring the lumpy economy

Somebody at the University of Chicago Press has noticed how my mind works, and sent me Slices and Lumps: Division + Aggregation by Lee Anne Fennell. It’s about the implications of the reality that economic resources are, well, lumpy and variably slice-able. The book starts with the concept of configuration: how to divide up goods that exist in lumps to satisfy various claims on them, and how to put together ones that are separate to satisfy needs and demands. The interaction between law (especially property rights) and economics is obvious – the author is a law professor. So is the immediate implication that marginal analysis is not always useful.

This framing in terms of configuration allows the book to range widely over various economic problems. About two thirds of it consists of chapters looking at the issues of configuration in specific contexts such as financial decisions, urban planning, and housing decisions. Housing, for example, encompasses some physical lumpiness or indivisibilities and some legal or regulatory ones. Airbnb – where allowed – enables transactions over the excess capacity due to lumpiness, as homeowners can sell temporary use rights.

The book is topped and tailed by some general reflections on lumping and slicing. The challenges are symmetric. The commons is a tragedy because too many people can access resources (slicing is too easy), whereas the anti-commons is a tragedy because too many people can block the use of resources. Examples of the latter include the redevelopment of a brownfield site where too many separate owners must all agree to sell their land, and also patent thickets. Property rights can be both too fragmented and not fragmented enough. There are many examples of the way policy can shape choice sets by making them more or less chunky – changing tick sizes in financial markets, but also unbundling albums so people can stream individual songs. Fennell writes, “At the very least, the significance of resource segmentation and choice construction should be taken into account in thinking innovatively about how to address externalities.” Similarly, when it comes to personal choices, we can shape these by altering the units of choice – some are more or less binary (failing a test by a tiny margin is as bad as failing by a larger one), others involve smaller steps (writing a few paragraphs of a paper).

Woven through the book, too, are examples of how digital technology is changing the size of lumps or making slicing more feasible – from Airbnb to Crowd Cow, “an intermediary that enables people to buy much smaller shares of a particular farm’s bovines (and to select desired cuts of meat as well),” whereas few of us can fit a quarter of a cow in the freezer. Fennell suggests renaming the ‘sharing economy’ the ‘slicing economy’. Technology is enabling both finer physical slicing and finer time slicing.

All in all, a very intriguing book.