Do cities have a future?

For obvious reasons there is a lot of debate about the future of cities, after 18 months when many of us have been stuck at home, and much speculation about whether there will ever be a return to commuting into dense city centres for work. The answer given in Survival of the City by Ed Glaeser and David Cutler is yes – probably. The basic reason is the role face-to-face interactions play in creating economic value, all the more so as increasing automation changes the kinds of jobs left to humans. Creativity, care, tacit knowledge – all require personal interaction. These long months on Zoom have depleted past stocks of social and organisational capital.

The ‘probably’ part, though, is that how successful cities will be depends on some key issues. Foremost among them is managing infectious disease. The book starts with epidemics in history and documents the ways cities have battled their effects, with clean water and adequate sewerage, not to mention the broader provision of good public health systems. Drug epidemics and pollution, for example, are usually urban blights. Other aspects of city management matter too: the book singles out crime prevention, an adequate supply of housing, and education provision (not least because the average level of education in a city is a strong predictor of life expectancy for its low-income inhabitants, even though they are generally not the most highly educated).

In a nutshell, the authors write: “A central theme of this book is that the vulnerability of large, dense, interconnected cities requires an effective, pro-active public sector.” Indeed. And the book ends with a series of recommendations: a “NATO for health”, effective across borders in a way WHO is not; better public health provision; education services that improve opportunities for those who are currently losing out in city life; and (this is a very US-focused book) criminal justice system reform.

I enjoyed reading it, not least because it speaks to my own instincts or prejudices about the role of cities. Lots of great detail too. Who knew the US health system might have been reformed in the 1960s if the then chair of the House Ways and Means Committee, Wilbur Mills, had not been caught drunk driving with an Argentinian stripper called Fanne Foxe at 2am, and tried to escape the police by jumping into the river? Or that Bayer used to sell heroin as a safe alternative to opium? Or that ‘watered stock’ literally used to be cattle given a lot of water to drink so that they appeared fatter?

I’m talking to the authors on 20th October as part of Bristol’s Festival of the Future City – bound to be an interesting discussion.


Reforming economics – the heterodox manifesto

Like buses, books about the state of economics seem to come along together. Tomorrow sees the publication of my latest, Cogs and Monsters: What Economics Is and What It Should Be – more on that tomorrow. Do join the launch event if you can.

Meanwhile, I read Steve Keen’s recent book The New Economics: A Manifesto. His focus is far more on macroeconomics than is mine, and some of his arguments (about macrodynamics and Minsky’s insights) will be familiar to readers of his earlier book. This new book also covers the environment: its critique of the Nordhaus approach to integrated assessment modelling, which led to a downplaying of the costs of delaying action against climate change, will find hearty agreement among the environmental economists I know. Indeed, even very mainstream people like Daron Acemoglu have joined the call for a more urgent economics of climate change.

I like the book, which aims to open the eyes of students embarking on an economics degree to the limitations of what they’re likely to be taught. Yet, having said that I agree with a lot of the arguments in The New Economics, I don’t share the sense that a monolithic ‘mainstream’ of neoclassical economists is determined to resist change because they are bad or stupid people. But then, I don’t regard myself as heterodox, and I’m always keen to point out how much brilliant work is going on in the subject, albeit generally in applied micro, because there’s a lot of misunderstanding about what economists generally do. My diagnosis is a combination of not stopping to think and institutional inertia (top-5 journals, promotion criteria, disciplinary silos, etc.). But more on my book in tomorrow’s post!


Our robot overlords?

I’ve chatted to Martin Ford about his new book Rule of the Robots for a Bristol Festival of Ideas event – the recording will be out on 6 October.

It’s a good read and quite a balanced perspective on both the benefits and costs of increasingly widespread use of AI, so a useful intro to the debates for anyone who wants an entry into the subject. There are lots of examples of applications with huge promise such as drug discovery. The book also looks at potential job losses from automation and issues such as data bias.

It doesn’t much address policy questions, with the exception of arguing in favour of UBI. Regular readers of this blog will know I’m not a fan, as UBI seems like the ultimate Silicon Valley, individualist solution to a Silicon Valley problem. I’d advocate policies to tilt the direction of automation, as there’s a pecuniary externality: individual firms don’t factor in the aggregate demand effects of their own cost-reduction investments. And also policies that address collective needs – public services, public transport, as well as a fair and sufficiently generous benefits system. No UBI in practice would ever be set high enough to address poverty and the lack of good jobs: to pay everyone anything like average income, you’d have to collect taxes amounting to more than average income per person, since the transfer bill alone would equal total income before funding anything else.
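The arithmetic behind that last point is worth spelling out. A minimal sketch, with purely illustrative population and income figures (not drawn from the book):

```python
# Back-of-envelope UBI arithmetic. All numbers are made up for illustration;
# the point is that the result does not depend on them.
population = 50_000_000       # adults receiving the UBI
average_income = 30_000       # average income per person

ubi = average_income          # pay everyone "average income"
gross_cost = population * ubi # total annual transfer bill
total_income = population * average_income

# The transfer bill equals 100% of total income -- and that is before
# paying for health, education, defence or anything else the state does.
share = gross_cost / total_income
print(f"Gross UBI bill is {share:.0%} of total income")
```

Whatever figures you plug in, a UBI set at average income costs exactly total income by construction, which is why any affordable UBI ends up far too low to substitute for targeted benefits.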

But that debate is what the Bristol event is all about!


Unwound

I confess to having read a couple of novels this week (The Truth About the Harry Quebert Affair and Tokyo Year Zero) but prior to this, on my travels to and from Italy, I read George Packer’s Last Best Hope: America in Crisis and Renewal. I’d bought it because his big book on US decline, The Unwinding, was so good. This is a short diagnosis of the US as a failed state, ending with – despite everything – a positive vision of how to get to the renewal part. In this, Packer casts back to the New Deal and the Great Society, particularly some of their relatively unknown architects (Frances Perkins, Bayard Rustin).

It was striking how US-specific the diagnosis is, even though so many of the headwinds are experienced elsewhere – deindustrialisation, geopolitical pressures, demographic and therefore political change, extreme inequality. I think this is probably because the US today is, like Britain at the start of the 20th century, in such a highly distinctive situation. The same tides are washing over different sandcastles.

One surprise was finding myself sharing a theory of change with Milton Friedman. Quoted here, Friedman wrote: “Only a crisis – actual or perceived – produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable.”

Anyway, Last Best Hope explores previous crises in order to identify what alternatives might now reduce divisions and restore democracy in the US. For all that Packer tries to end on an upbeat note, I’m not sure I believe this is possible. For those of us who grew up with the positive vision of American dynamism, egalitarianism, and opportunity (alongside all its obvious terrible flaws), this is pretty depressing. As Adam Tooze concludes his excellent new book, Shutdown (which I’ve reviewed for Democracy, due to be online soon), history has come for us all.


Actually existing AI

For the first time in a year I managed to get abroad for a few days – the Ambrosetti Forum at the Villa D’Este on Lake Como – and apart from the inherent joy of being somewhere beautiful and sunny and foreign, it gave me plenty of reading time. One of the books I polished off is Kate Crawford’s excellent Atlas of AI. It’s a forensic exploration of the unseen structures shaping the way AI is being developed and deployed in the world, and it is fair to say she is profoundly sceptical about whether ‘actually existing AI’ is serving society broadly, as opposed to making a small minority of (mainly) men rich and powerful. “To understand how AI is fundamentally political,” she writes in the introduction, “we need to go beyond neural nets and statistical pattern recognition to instead ask what is being optimized, and for whom, and who gets to decide.”

The book starts with the material basis of the industry, in particular the extraction of rare earths and the industry’s voracious and growing consumption of energy. We all surely know about the energy appetite of crypto, but one point I hadn’t really appreciated is this: “The amount of compute used to train a single AI model has increased by a factor of ten every year.” The next chapter goes on to discuss the extent to which AI depends on low-cost human labour. I think the way Amazon’s Mechanical Turk works is quite well known – a nice book about this was Ghost Work – but this chapter focuses on Amazon warehouses, image-labelling work (“the technical AI research community relies on cheap crowd-sourced labour for many tasks that can’t be done by machine”), and also – nice neologism – ‘fauxtomation’, when tasks are transferred from human workers to human customers: think ‘automated’ checkouts in shops. The chapter has a nice section discussing the role of time in business models: in an evolution of the industrial organisation of time, the continuing automation of economic activity is requiring humans to work ever faster. The battle is for ‘time sovereignty’.

There is, not surprisingly, a chapter on data, underlining the point that data is profoundly relational, but also arguing that the reliance on ‘data’ downgrades other forms of knowledge, such as linguistic principles, and that there is remarkably little questioning of where it comes from and what it actually measures. I hadn’t realised that many computer science departments have not had ethical review processes, on the basis that they do not have human subjects for their research. It’s also a bit of a shocker to realise that some widely used AI databases are palimpsests of older collections of data, embedding unpalatable classifications and assumptions.

There’s a similarly shocking chapter on facial recognition – bad enough in the ungoverned way it’s being used, but I hadn’t clocked the latest trend of using it to “identify” people’s moods. And the book winds up back at power. As Crawford writes, “We must focus less on ethics and more on power.” I couldn’t agree more: there are a gazillion ethics statements and loads of ethics research, but they won’t change anything. I’d add incentives too. Given the concentration of all that compute, the way AI and power structures interact will shape the kind of world we live in ten years from now. Excellent book.
