Humans and machines

My colleague Neil Lawrence’s new book, The Atomic Human: Understanding Ourselves in the Age of AI, is a terrific account of why ‘artificial intelligence’ is fundamentally different from embodied human intelligence – on the one hand an optimistic perspective, but on the other one that leads him to end with an alarming warning: the potential of pervasive machine intelligence “could be as damaging to our cultural ecosystem as our actions have been to the natural ecosystem.” The influence of AI on human society could parallel our adverse influence on the environment, however good the intentions. Just as nature moves at the pace of evolutionary time, so that the interface between humans and nature has failed to take account of the damage humans cause, the computer-human interface is characterised by a similar mismatch in information-processing speeds.

The book does not offer a handy list of actions to prevent the damage AI might do to us, but ends by warning about two things: the immense concentration of power in its development and use; and the use of automated decision-making in contexts where judgement is essential – which means many contexts, wherever uncertainty enters the picture. I rather fear those horses have bolted, though.

Most of the book is a fascinating account of both types of intelligence, AI and human cognition, using information theory as well as cognitive science to explain the profound differences. As he notes, “Shannon defined information as being separated from its context,” but humans need contextual understanding to communicate. Neil uses stories to provide context, to make what could be rather dry material more engaging, braiding the same examples (many from wartime: Bletchley Park, his grandfather’s D-Day experience alongside General Patton’s, the development of radar, missile testing…) through the text. Sometimes I found these confusing, but I have a very literal mind.

There have been lots of books about AI out this year, and I’ve generally enjoyed the ones I’ve read – although whatever you do, avoid Ray Kurzweil’s. I’d recommend adding this one to the to-read list, as it offers a fresh perspective on AI from a super-expert and super-thoughtful practitioner.


Escape velocity?

I’ve read Ray Kurzweil’s jaw-dropping book, The Singularity is Nearer: When We Merge With AI, so you don’t have to. He does literally believe we will be injected with nanobots to create an AI super-cortex above our own neo-cortex, plugged into the cloud and therefore all of humanity’s accumulated intelligence, and thus become super-intelligent with capabilities we can hardly imagine. Among the other possibilities he foresees are AI ‘replicants’ (yes, he calls them that) created from the images and texts of deceased loved ones, to restore them to artificial life. The main challenge he foresees will be their exact legal status. The book has a lot of capsule summaries about consciousness, intelligence, how AI works – and also the general ways in which life is getting better, there will be more jobs, and our health and lifespans will improve by leaps and bounds.

Might he be wrong about reaching ‘longevity escape velocity’ and the AI singularity by 2030? There is a hint of this when he says that book production is so slow that what he wrote in 2023 will already have been overtaken by events by mid-2024, when we are reading it: “AI will likely be much more woven tightly into your daily life.” Hmm. Not sure about that prognostication. Although one of the scariest things about the book is the advance praise from Bill Gates, who writes that the author is “The best person I know at predicting the future of artificial intelligence.” Do all the Tech Types believe this?

One suspects they believe they’re already more super-intelligent than the rest of us, so what could possibly go wrong?


AIs as the best of us

Another book of many out on AI is As If Human: Ethics and Artificial Intelligence by Nigel Shadbolt and Roger Hampson. I found this a very accessible book on AI ethics, possibly because neither author is an academic philosopher (sorry, philosophers…). Generally I’m a bit impatient with AI ethics, partly because it has dominated debate about AI at the expense of thinking about incentives and politics, and partly because of my low tolerance for the kind of bizarre thought experiments that seem to characterise the subject. Nevertheless, I found this book clear and pretty persuasive, with the damn trolley problem only appearing a small number of times.

The key point is reflected in the title: “AIs should be judged morally as if they were humans” (although of course they are not). This implies that any decisions made by machines affecting humans should be transparent, accountable and open to appeal and redress; we should treat AI systems as if they were humans taking the decisions. There may be contested accountability beyond that among other groups of humans – the car manufacturer and the insurer, say – but ‘the system’ can’t get away with no moral accountability. (Which touches on a fantastic new book, The Unaccountability Machine, by Dan Davies that I will review here soon.)

Shadbolt and Hampson end with a set of seven principles for AIs, including ‘A thing should say what it is and be what it says’, and ‘Artificial intelligences are only ethical if they embody the best human values’. Also that private mega-corps should not be determining the future of humanity. As they say, “Legal pushback by individuals and consumer groups against the large digital corporations is surely coming.”

As If is well worth a read. It’s short and high level but does have examples (and includes the point that seems obvious but I have seen too rarely made, that the insurance industry is steadily putting itself out of business by progressively reducing risk pooling through data use).


AI and us

Code Dependent: Living in the shadow of AI by Madhumita Murgia is a gripping read. She’s the FT’s AI Editor, so the book is well-written and benefits from her reporting experience at the FT and previously Wired. It is a book of reportage, collating tales of people’s bad experiences either as part of the low-paid work force in low income countries tagging images or moderating content, or being on the receiving end of algorithmic decision-making. The common thread is the destruction of human agency and the utter absence of accountability or scope for redress when AI systems are created and deployed.

The analytical framework is the idea of data colonialism, the extraction of information from individuals for its use in ways that never benefit them. The book is not entirely negative about AI and sees the possibilities. One example is the use of AI on a large sample of knee X-rays to look for osteoarthritis. The puzzle being tackled by the researcher concerned was that African American patients consistently reported greater pain than patients of European extraction when their X-rays looked exactly the same to the human radiologists. The solution turned out to be that the X-rays were scored against a scale developed in mid-20th century Manchester on white, male patients. When the researcher, Ziad Obermeyer, fed a database of X-ray images to an AI algorithm, his model proved a much better predictor of pain. Humans wear blinkers created by the measurement frameworks we have already constructed, whereas AI is (or can be) a blank slate.

However, this is one of the optimistic examples in the book, where AI can potentially offer a positive outcome for humans. It is outnumbered by the counter-examples – Uber drivers being shortchanged by the algorithm or falsely flagged for some misdemeanour and having no possibility of redress, women haunted by deepfake pornography, Kenyan workers traumatised by the images they need to assess for content moderation yet unable to even speak about it because of the NDAs they have to sign, data collected from powerless and poor humans to train medical apps whose use they will never be able to afford.

The book brought to life for me an abstract idea I’ve been thinking about pursuing for a while: the need to find business models and financing modes that will enable the technology to benefit everyone. The technological possibilities are there but the only prevailing models are exploitative. Who is going to work out how to deploy AI for the common good? How can the use of AI models be made accountable? Because it isn’t just a matter of ‘computer says No’, but rather ‘computer doesn’t acknowledge your existence’. And behind the computers stand the rich and powerful of the tech world.

There are lots of new books about AI out or about to be published, including AI Needs You by my colleague Verity Harding (I’ll post separately). I strongly recommend both of these; and would also observe that it’s women in the forefront of pushing for AI to serve everyone.


Life, the universe and everything

I’m late to Max Tegmark’s Life 3.0. For all its bestseller status, it didn’t do a lot for me. Probably more to do with me than the book. There’s a large chunk about the distant future and existential risk, which I can’t get interested in. There’s also a lot of physics and evolution, philosophy and cognitive science thrown into the mix, at a very simplified level. And then there’s the love-in with Elon Musk – including a back cover blurb by the billionaire recently referred to by the Daily Star as a ‘car salesman’. Musk funded Tegmark’s Future of Life Institute. Life 3.0 was published in 2017, before Musk’s Twitter takeover and voyage into questionable political stances. But fundamentally, I couldn’t figure out what the book is trying to say, beyond that AI is changing things a lot.

Having said all that, there were some points that interested me. One is the idea of the substrate-independence of computation. Another – one that jumps out from the examples of AI use cases and how they can go wrong, rather than being made explicitly in the book – is that communication between AIs and humans will be fundamentally important to avoiding terrible mistakes. The UX design here is surely as important as any prompt engineering. The third is a section about the reported argument (by David Vladek) that self-driving cars should be required to have their own car insurance, which would incentivise safety in their design. This raises a question about whether AIs could own property, and when you think about it one could instead require the owners of self-driving cars to take out the insurance. But it’s an interesting thought.

I think Life 3.0 is worth a read nevertheless. (Yuval Noah Harari quite liked it – whatever you make of that.) It ranges widely over the kind of issues societies need to be thinking about as they let AIs operate, and is clearly written – a good flight or train journey book.
