Killer doubts

After a fantastic recent workshop I read Merchants of Doubt by Naomi Oreskes and Erik Conway. I’m late to this (it was first published in 2010) & had read some reviews, but this didn’t make the book any less shocking. The subtitle is ‘How a handful of scientists obscured the truth on issues from tobacco smoke to global warming.’ Part of the scandal is that it was literally a handful of people who abused their scientific credentials in one area to persuade the media and public that there was meaningful scientific doubt about the lethality of smoking or the reality of global warming. These were mainly men who had worked as Cold War scientists, mostly physicists, yet who commented in an authoritative tone on biomedical or environmental science.

The book’s hypothesis is that the doubt-merchants – consulting for big companies or paid through free-market foundations the corporations had established and funded – were scientists turned anti-science because they were opposed to regulation. Regulation was what communists did; capitalist corporations should be free to do what they want. In each of the examples where there was a solid academic consensus about the issue – the hole in the ozone layer, tobacco, anthropogenic global warming, acid rain, etc – the tactics were a trifecta: point out that there is always some scientific doubt about something (indeed – science progresses by skeptical enquiry and there are always more avenues to explore); argue that the costs of remedying the issue are disproportionately high; attack the scientists as ideological and mainly interested in progressing their own careers (those elitist ‘experts’). Scientists were portrayed as at best the enforcers of the nanny state, wanting to prevent people enjoying their natural liberties.

I can sort of understand the psychology of conservative Cold Warriors who, over time, burrow ever further into their own reality bunker – recall, it is literally the same small group of people in the US over time, joined in the 1980s by some UK free marketeers. I am less able to understand the media, however. The book describes the corporate lobbyists’ use of the ‘fairness doctrine’ in US TV broadcasting, requiring balanced coverage of controversial matters. The issues described here were not controversial – the controversy was manufactured. It was not hard in these cases to discern where the opinion of the great majority of scientists lay. This is literally a core part of the job of journalists and their editors – figuring out how to tease something close to fact out of a conflicting set of statements by people with their own agendas and interests.

Yet false balance is still widespread in journalism. In 2011 on the BBC Trust we commissioned a report from Professor Steve Jones on the BBC’s science coverage and he highlighted the danger of this false balance interpretation of its editorial requirement of ‘due impartiality’. He wrote: “Attempts to give a place to anyone, however unqualified, who claims interest can make for false balance,” and the opposite of impartiality. It’s a few years old but I commend it still, along with the sobering Merchants of Doubt. After all, in some of the cases described in the book, people died when sufficient doubt was sown in the public mind to inhibit regulatory action; and when it comes to global warming the stakes are huge.


Goliaths and populists

Matt Stoller’s Goliath: The 100 year war between monopoly and democracy ended up being a different book from the one I’d expected: I thought it was going to be about (lack of) competition in digital markets but in fact it’s a broader economic/political economy history of 20th century America in terms of the ebb and flow of the power of big business. Anti-trust policy is just one part of the story.

It’s an interesting story, one of the steady accumulation of economic and political power in a few hands, interrupted periodically by a decisive political, often populist, reset in favour of the small farmer or business (occasionally even the consumer). It’s also a story told from an anti-big business perspective, based on an analysis of the corruption of political power by accumulated money; which is fine, but it does make it a binary tale of heroes and villains. This includes one hero I hadn’t heard of before, Representative Wright Patman, a long-serving congressman on the side of the little people over the decades.

And some surprising villains. Chief among them is J.K. Galbraith, for his love of technocracy and big enterprises – after all it’s easier to direct an economy of big firms than little ones, and he was responsible for implementing price controls during the war. Galbraith wrote: “There must be some element of monopoly in an industry if it is to be progressive.” He saw small firms and farmers as inherently conservative. Other public intellectuals such as Richard Hofstadter shared the perspective of small business being reactionary and inefficient, so the liberals became the corporatists – in contrast to the 1920s and 30s when those on the left (to be anachronistic about it) were standing up for the small guy. Thus, “Politics was no longer an avenue for structuring society but rather a means of ratifying what technologically driven organizations already saw as an optimal arrangement,” Stoller writes of the revival of corporatism in the 60s and 70s.

There are unsurprising villains too, of course, from Carnegie, Morgan and Mellon to Citibank’s Walter Wriston reviving the power of the financial sector by successfully unravelling the financial regulation that had been constraining the big banks since the Depression. Indeed, my big takeaway from the book was the need to constrain big finance, far more than big tech or big anything else.

One of the interesting things about the long-run perspective is the way it makes clear the pendulum in policy, from populist anti-big business to pro-big business phases in either oligarchic or corporatist modes. As a history of the US economy through this lens, Goliath is very stimulating. It is very US-centric, however – and this focus carries over into today’s debate about anti-trust or indeed other areas of economic policy. America matters, particularly in digital markets for obvious reasons, but it is also highly distinctive.

Trying to think through the extent to which the small guy/big business pendulum carries over to the UK or elsewhere in Europe, I concluded that it does to a degree. We had conglomeration in the 60s and 70s too. But in recent times European anti-trust practice has differed greatly from the US – I’m open to correction as this is a question of institutional history but I don’t think European competition authorities ever went the full Chicago School. (Just as inequality has increased everywhere but only the US is truly back to the Gilded Age.)

All this is by way of saying Goliath is well worth a read, bearing in mind its firmly – proudly – slanted perspective. It has all kinds of detail I didn’t know. I found its contrarian view about Galbraith, aligning him with Peter Drucker for example, absolutely fascinating & am still mulling over the relationship between technocracy and scale.


Weightlessness Redux

The weeks are flying past. I’ve read recently an array of non-economics books (Owen Hatherley’s Trans-Europe Express, Susan Orlean’s The Library Book, Francis Spufford’s True Stories, a couple of the re-issued Maigrets) and also Matt Stoller’s Goliath and Andrew McAfee’s More from Less. I’ll review Goliath in my next post.

More from Less: the surprising story of how we learned to prosper using fewer resources – and what happens next (to give it the full overly wordy subtitle) is presumably aimed at the airport bookshop market. It’s written in a very accessible way and it summarises a lot of interesting research – although not McAfee’s own.

The main point is that the material resource intensity of economic growth has been declining in the rich western economies. This is set in an account of the origins of modern capitalism in the Industrial Revolution – emphasising the importance of ideas and contestability through markets – and the urgent debate about the trade-off between humans getting better off (escaping the Malthusian trap) and the damaging environmental impact of growth. The dematerialisation of the economy is helping improve the terms of that trade-off.

A major difficulty I have in reviewing this is that, although it’s an enjoyable read, I wrote a book making the same point in 1997, The Weightless World (out of print, free pdf here). There’s no reason at all McAfee should have read it as I was a nobody, and it was a long time ago. But it does mean that (perhaps uniquely) I can’t find anything that’s new in More From Less. The research he cites concerning dematerialisation dates from 2012 (Chris Goodall) and 2015 (Jesse Ausubel, The Return of Nature) – so this is another example of a phenomenon being discovered twice, because there was similar work in the mid-1990s on material flow accounts, on which I based my book. Alan Greenspan even made a speech about it in 1996. It’s a noteworthy phenomenon, so I hope McAfee does alert new readers to it. He puts far more emphasis on environmental challenges than I did back in the more innocent 1990s; my focus was more on the socio-economic consequences of a dematerializing economy.

However, the weightlessness or dematerialization phenomenon doesn’t deliver a knockout blow to the degrowth argument that a reduced but still positive material intensity of growth is not enough. Tim Jackson is the most thoughtful advocate of this argument – see this recent essay in Science. It may be that we need to find a way to tread more lightly on the planet in absolute terms as well as relative ones, although I’ll welcome weightless growth as better than the weighty alternative. And – as even no growth is politically divisive, never mind degrowth – the issues raised in More From Less are difficult and important ones.


Good economics and expertise

“A recurring theme of this book is that it is unreasonable to expect markets to always deliver outcomes that are just, acceptable, or even efficient.” So write Abhijit Banerjee and Esther Duflo at the start of one chapter of their new book – fabulously timed to coincide with the Nobel they just shared with Michael Kremer – Good Economics for Hard Times: Better Answers to Our Biggest Problems. I really enjoyed reading the book but it left me with one nagging question, of which more later.

The Introduction is titled ‘MEGA: Make Economics Great Again’, and sets out the framing of the rest of the book, which is to apply the methods of economic analysis to a range of big problems. Thus there are chapters on immigration and trade, technology, climate change, preferences and social choice, the welfare state. Each chapter follows the same model: a beautifully clear and quite balanced summary of the frontier of economic research, and therefore of the conclusions economic expertise can justify when it comes to these major challenges. This makes it, by the way, an excellent teaching resource. There is relatively little about either the RCT methodology or behavioural economics for which the duo are famous, although these of course feature. A lot of the whole economic toolkit is covered.

For example, the chapter on immigration and its impact on the host country and its workers is a model of clarity in setting out why migrants may be complements rather than substitutes for native born workers, and what choices profit-maximizing employers might make. It gives the example of the 1964 expulsion of Mexican workers from California, on the grounds that they depressed local wages. “Their exit did nothing for the natives: wages and employment did not go up.” The reason was that farmers mechanized instead. For instance, adoption of tomato harvesting machines rose from zero in 1964 to 100% in 1967. The farmers also switched to crops for which harvesting machines were available, and for some time stopped growing others such as lettuce and strawberries.

The chapter on the likely impact of technology on jobs is similarly clear and yet also notes that there is scant consensus among economists on this. I thought this was the weakest chapter, but perhaps that’s because it’s my area and they try to cover a vast literature – taking in competition policy, inequality due to technology, positional goods and the nature of economic growth… One could expend thousands of words on each of these. 🙂

The book is a bestseller and deservedly so. My big reservation about it is the way this demonstration of the analytical power of economic expertise lands in these ‘we’ve had enough of experts’ times. This is as much a matter of tone as content, but I found statements like this actually a bit uncomfortable: “This underscores the urgent need to set ideology aside and advocate for things most economists agree on, based on the recent research.” I have come to believe this deep urge among economists to separate the positive and normative – dating back to Lionel Robbins – is a mistake. Dani Rodrik’s Economics Rules, another excellent overview of ‘good economics’, is much more nuanced and less economics-imperialistic in this respect.

Economic analysis rocks, as does evidence, but we have to engage with the ideology too. Good economics is about more than technical expertise.

Thinking about AI

There are several good introductions to AI; the three I’ve read complement each other well. Hannah Fry’s Hello World is probably the best place for a complete beginner to start. As I said in reviewing it, it’s a very balanced introduction. Another is Pedro Domingos’s The Master Algorithm, which is more about how machine learning systems work, with a historical perspective covering different approaches, and a hypothesis that in the end they will merge into one unified approach. I liked it too, but it’s a denser read.

Now I’ve read a third, Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. It gives a somewhat different perspective by describing wonderfully clearly how different AI applications actually work, and hence helps the reader understand their strengths and limitations. I would say these are the most illuminating simple yet meaningful explanations I’ve read of – for example – reinforcement learning, convolutional neural networks and word vectors. I wish I’d had this book when I first started reading some of the AI literature.

One thing that jumps out from the crystal clear explanations is how dependent machine learning systems are on humans – from the many who spend hours tagging images to the super-skilled ‘alchemists’ who are able to build and tune sophisticated applications: “Often it takes a kind of cabbalistic knowledge that students of machine learning gain both from their apprenticeships and from hard-won experience.”

The book starts with AI history and background, then covers image recognition and similar applications. It moves on to issues of ethics and trust, and then natural language processing and translation. The final section addresses the question of whether artificial general intelligence will ever be possible, and how AI relates to knowledge and to consciousness. These are open questions, though I lean toward the view – as Mitchell does – that there is something important about embodiment for understanding in the sense that we humans mean it. Mitchell argues that deep learning is currently hitting a ‘barrier of meaning’, while being superb at narrowly defined tasks of a certain kind. “Only the right kind of machine – one that is embodied and active in the world – would have human level intelligence in its reach. … after grappling with AI for many years, I am finding the embodiment hypothesis increasingly compelling.”

The book then ends with brief reflections on a series of questions – when will self-driving cars be common, will robots take all the jobs, what are the big problems left to solve in AI.

Together, the three books complement each other and stand as an excellent introduction to AI, cutting through both the hype and the scary myths, explaining what the term covers and how the different approaches work, and raising some key questions we will all need to be thinking about in the years ahead. The AI community is well-served by these thoughtful communicators. A newcomer to the literature could read these three and be more than well-enough informed; my recommended order would be Fry, Mitchell, Domingos. Others may know of better books of course – if so, please do comment & I’ll add an update.

UPDATE

The good folks on Twitter have recommended the following:

Human Compatible by Stuart Russell (I briefly forgot how much I enjoyed this.)

Rebooting AI by Gary Marcus

Parallel Distributed Processing by David Rumelhart & others (looks technical…)

The Creativity Code by Marcus Du Sautoy

Machine, Platform, Crowd by Andrew McAfee & Erik Brynjolfsson (also reviewed on this blog)

Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (a key textbook)