AIs as the best of us

Another of the many books out on AI is As If Human: Ethics and Artificial Intelligence by Nigel Shadbolt and Roger Hampson. I found this a very accessible book on AI ethics, possibly because neither author is an academic philosopher (sorry, philosophers….). Generally I’m a bit impatient with AI ethics, partly because it has dominated debate about AI at the expense of thinking about incentives and politics, and partly because of my low tolerance for the kind of bizarre thought experiments that seem to characterise the subject. Nevertheless, I found this book clear and pretty persuasive, with the damn trolley problem only appearing a small number of times.

The key point is reflected in the title: “AIs should be judged morally as if they were humans” (although of course they are not). This implies that any decisions made by machines affecting humans should be transparent, accountable, and open to appeal and redress; we should treat AI systems as if they were humans taking the decisions. There may be contested accountability beyond that, among other groups of humans – the car manufacturer and the insurer, say – but ‘the system’ can’t get away with no moral accountability. (Which touches on a fantastic new book, The Unaccountability Machine by Dan Davies, that I will review here soon.)

Shadbolt and Hampson end with a set of seven principles for AIs, including ‘A thing should say what it is and be what it says’ and ‘Artificial intelligences are only ethical if they embody the best human values’. They also argue that private mega-corps should not be determining the future of humanity. As they say, “Legal pushback by individuals and consumer groups against the large digital corporations is surely coming.”

As If Human is well worth a read. It’s short and high-level but does have examples (and includes the point that seems obvious yet is too rarely made: that the insurance industry is steadily putting itself out of business by progressively reducing risk pooling through data use).



Digital design

Over the holiday weekend I read (among other things*) Digital Design: A History by Steven Eskilson. I enjoy reading design books in general – a window into a more glamorous specialism than economics. This one covers a range of aspects, from the design of gadgets (from the IBM Selectric typewriter to Apple’s dominance in this arena) to fonts to web design to data visualisation to architecture. So it’s quite eclectic, and includes using digital tools to design (as in architecture) as well as the design of digital artefacts. But one theme that emerges across all these areas is the lasting influence of the Bauhaus (about which I read a terrific book a while back, a biography of Gropius by Fiona MacCarthy). Digital Design is also a very handsome book with loads of images.

* Robin Ince’s Bibliomaniac and two thirds of The Currency of Politics by Stefan Eich, which I’ll write about another time.


AI and us

Code Dependent: Living in the Shadow of AI by Madhumita Murgia is a gripping read. She’s the FT’s AI Editor, so the book is well-written and benefits from her reporting experience at the FT and previously Wired. It is a book of reportage, collating tales of people’s bad experiences, either as part of the low-paid workforce in low-income countries tagging images or moderating content, or on the receiving end of algorithmic decision-making. The common thread is the destruction of human agency and the utter absence of accountability or scope for redress when AI systems are created and deployed.

The analytical framework is the idea of data colonialism, the extraction of information from individuals for use in ways that never benefit them. The book is not entirely negative about AI and sees the possibilities. One example is the use of AI on a large sample of knee X-rays to look for osteoarthritis. The puzzle being tackled by the researcher concerned was that African American patients consistently reported greater pain than patients of European extraction when their X-rays looked exactly the same to human radiologists. The explanation turned out to be that the X-rays were scored against a scale developed in mid-20th-century Manchester on white, male patients. When the researcher, Ziad Obermeyer, fed a database of X-ray images to an AI algorithm, his model proved a much better predictor of pain. Humans wear blinkers created by the measurement frameworks we have already constructed, whereas AI is (or can be) a blank slate.

However, this is one of the optimistic examples in the book, where AI can potentially offer a positive outcome for humans. It is outnumbered by the counter-examples – Uber drivers shortchanged by the algorithm or falsely flagged for some misdemeanour with no possibility of redress, women haunted by deepfake pornography, Kenyan workers traumatised by the images they have to assess for content moderation yet unable even to speak about it because of the NDAs they must sign, data collected from powerless and poor people to train medical apps they will never be able to afford to use.

The book brought to life for me an abstract idea I’ve been thinking about pursuing for a while: the need to find business models and financing modes that will enable the technology to benefit everyone. The technological possibilities are there, but the only prevailing models are exploitative. Who is going to work out how to deploy AI for the common good? How can the use of AI models be made accountable? Because it isn’t just a matter of ‘computer says No’, but rather ‘computer doesn’t acknowledge your existence’. And behind the computers stand the rich and powerful of the tech world.

There are lots of new books about AI out or about to be published, including AI Needs You by my colleague Verity Harding (I’ll post about it separately). I strongly recommend both of these, and would also observe that it is women at the forefront of pushing for AI to serve everyone.


We’re all doomed – maybe

I read Peter Turchin’s (2023) End Times: Elites, Counter-Elites and the Path of Political Disintegration on a long flight yesterday (I’m at Stanford for a couple of workshops). I’m not sure what to make of it. It’s well-written and an engaging read. The basic idea that there is a pendulum in the strength and health of polities, of generation-long good times and bad times, seems valid enough. The idea that one can model these computationally I find a bit weird – speaking as one who spent some years early in her career modelling the UK and other economies computationally. Predicting outcomes from those models even a year ahead was tricky enough. This kind of system-wide modelling involves a great deal of judgement, whereas this book claims an implausible degree of automaticity. As a sceptic about macroeconomic modelling I’m a natural sceptic about – whatever we are going to call this – metaeconomic modelling?

Turchin’s dynamics are driven by two phenomena: the immiserisation of the working class as the labour share of the economy declines, due to a ‘wealth pump’ whereby successful elites rig the economy to grab ever more of the value; and the over-production of elites, who have to compete to benefit from the wealth pump. After a cycle of growth and integration, these mechanisms give way to a cycle of conflict and chaotic politics, driven by a coalition of the impoverished (Trump voters from the former manufacturing heartlands) and the not-successful-enough elites (J.D. Vance).

This is a neat model, and seems to correspond to today’s US reality, but I have questions. For example, if expanding education is ‘over-production of elites’, what are we to make of the role of expanded education in technical progress and growth – is periodic conflict simply a cost of investment in human capital that has to be borne? Where does the role of demand in creating jobs for these productive people fit in? Do we need a war to kill off the excess PhDs and return to a stable, integrative phase? The role of excess labour supply in Turchin’s model seems to involve the lump of labour fallacy. All the (UK) evidence I know on immigration is that the effects on local wages and employment depend on (a) how complementary or not the skills of migrants are to local skills and (b) the state of the business cycle. Additional labour supply does not automatically mean immiserisation of workers.

There’s also a long quotation from Jack Goldstone to the effect that population had grown substantially for 50 years before every major revolution and rebellion between 1500 and 1900. Does this mean the model will predict no more revolutions outside sub-Saharan Africa as populations are now in decline? It’s also a very US-centric book, despite using historical examples from many countries. For instance, the UK labour share has not fallen in the way it has in the US, although median wages have certainly stagnated.

I suppose in the end how seriously you take this kind of modelling depends on your belief about the extent to which human societies can learn and thus escape from past patterns. For what it’s worth, the book predicts the 2020s in the US will stay tumultuous. Of course, one doesn’t need a model to see this. I visit here once or twice a year but might stay away for a bit after early November 2024.


Hoping, not doing

My in-pile of books is a bit random at the moment. I just finished a posthumously-published set of essays by Richard Rorty, What Can We Hope For? It’s a strikingly passive title (as indeed was Lenin’s What is to be Done?, although less so), and the essays have a notably pessimistic tone. Rorty is known for his prescience about the threats to American democracy posed by grotesque inequality, the crumbling of jobs and of the fabric of middle America, and authoritarian tendencies. He famously warned of the chance of a strongman dictator 20 years before Trump’s 2016 election. Rorty is also known for his critique of the frivolity of the campus-led American left, as well as of the viciousness of the American right. These features are prominent in many of the essays.

However, I enjoyed the earlier section consisting of philosophical essays more than the (then-)current political commentary in the later sections. I haven’t read much by Rorty but am inclined to like his pragmatism: “It does not matter whether we can get consensus on moral principles,” he writes, “as long as we can get it on practices.” And, “The fact that moralities are, among other things, local systems of social control does no more to cast doubt on moral progress than the fact that scientific breakthroughs are financed by people hoping for improved technology casts doubt on progress in the ‘hard’ sciences.” He has some nice comments about the folly of the hardline positivist distinction between the rational and everything else – imagination, emotion – in making sense of the world, and particularly in political decision-making.

The back of the book claims the essays offer creative solutions to the world’s, and especially America’s, problems. I didn’t spot the solutions, unless ‘creative’ is code for ‘implausible’. Hence, I suppose, the title. Hoping, not doing.
