AI needs all of us

There’s no way I can be unbiased about Verity Harding’s new book AI Needs You: How we can change AI’s future and save our own, given that it began with a workshop Verity convened and the Bennett Institute hosted in Cambridge a few years ago. The idea – quite some time before the current wave of AI hype, hope and fear – was to reflect on how previous emerging disruptive technologies had come to be governed. After some debate we settled on space, embryology, and ICANN (the internet domain naming body), as between them these seemed to echo some of the issues regarding AI.

These discussions set the scene for Verity’s research into the detailed history of governance in each of these cases, and the outcome is a fascinating book that describes each in turn and reflects on the lessons for us now. The overall message is that the governance and use of technology in the public interest, for the public good, is possible. There is no technological determinism, nor any trade-off between public benefit and private innovation. The ‘Silicon Valley’ zeitgeist of inevitability, the idea that the tech is irresistible and society’s task is to leave its management to the experts, is false.

The implication of this is captured in the book’s title: “Understanding that technology – how it gets built, why, and by whom – is critical for anyone interested in the future of our society.” How AI develops, what it is used for and how – these are political questions requiring engaged citizens. This is why the historical examples are so fascinating, revealing as they do the messy practicalities and contingency of citizen engagement, political debate, quiet lobbying, co-ordination efforts, events and sheer luck. The embryology example is a case in point: the legislation in the UK was based on the hard work of the Warnock Commission, its engagement with citizens and its tireless efforts to explain the science, but also on years of political debate and a key decision by Mrs Thatcher about its Parliamentary progress. The resulting legislation has since stood the test of time and set an ethical and regulatory framework for other countries too. The lesson is that the governance of AI will be shaped not by the clever people designing it, but by political and social forces.

The book is beautifully written and a gripping read (more than you might expect for a book about regulating technology). There are quite a few new books on AI out this spring, and others I’ve read in proof are also excellent; but this will definitely be one of the ones that stand the test of time. Not for nothing did Time magazine name Verity one of the 100 most influential people in AI. She is now leading a MacArthur Foundation-funded project at the Bennett Institute on the geopolitics of AI. I’ll be in conversation with her at Waterstones in Cambridge on 14th March.


Translation

Translating Myself and Others by Jhumpa Lahiri was a left-field choice for me, a book of essays about writing in English and then Italian, and about translating her own texts – and those of others. But I enjoyed it, not least because it made me think about the English-to-English translation needed in inter-disciplinary work. We use the same word for subtly or even significantly different concepts. Capital is an obvious example, but so are discounting, efficiency, optimization, rational and many others. The first stage of any project with people from other backgrounds is a translation stage. Hard work, but so satisfying when there are moments of illumination about how other people think about a common question.
