The people’s AI?

The title of Maximilian Kasy’s new book The Means of Prediction cleverly riffs off Marx’s concept of the means of production for the age of AI. These means, Prof Kasy argues, are data, computational infrastructure, technical skills and energy. The core argument is that these means rest in few hands, so the value being created by AI is concentrated in a relatively small number of technology companies and their founders and employees. What’s more, this concentration also skews the kind of AI systems being created: AI is designed to optimise some objective function, chosen by its owners, as I argued in Cogs and Monsters (and some recent as-yet-unpublished lectures).

All this, the book argues, makes a powerful case for democratising AI. Unless countervailing power from workers, consumers, and politicians is brought to bear, the technology will create further inequality and will not serve the public good, only private, profit-maximising interests. Kasy convincingly argues for collective means of control, rather than just individual protections such as GDPR: “Machine learning is never about individual data points, it is always about patterns across individuals.” Control over one’s own personal data does not protect privacy as long as similar people share their data.

The book starts with a section explaining AI, followed by a section explaining how economists understand social welfare – this being the approach (as Cogs explains) that is being automated by AI. These are very clear and useful for people who are hazy about either, although as they are so introductory it did make me wonder about the target audience for the book. Having said that, there is such a lack of knowledge among the public and indeed lots of policymakers and politicians that these sections are probably sorely needed.

The final two sections go on to regulatory challenges and the need for democratising the development and use of AI. As Kasy points out, policy decisions have always involved choices, with winners and losers, and have often rested on predictions; after all, this is what policy economists have been doing for decades. The increasing use of AI to make decisions automates and speeds up these choices – the danger being that it does so with embedded biases and implicit decisions hidden by the technology, and the all-too-common presumption that the machine must be right.

The optimistic take on where we are is that the use of AI to make predictions and decisions will surface some of the implicit assumptions and biases, and so force more public deliberation about how our societies operate and affect different people. The pessimistic take is of course that they simply become more deeply hidden and entrenched. Depending on my mood, at present I think things could go either way. But that open prospect makes The Means of Prediction a very timely book. And – having pondered who it was aimed at – probably one that every official and politician should be made to read as they chirrup about using AI in public services.


When a nearly 30-year-old textbook is still the best on the digital economy

There’s a gap in the market. I’ve started developing a Digital/AI economics syllabus for policy MPhil students who won’t all be economics graduates; and of course there are loads of fantastic papers and resources online.

It’s fairly straightforward to write down a list of topics – recognising that one can’t cover everything. Here are my headings for an 8-week Cambridge term – so there’s nothing on crypto, for example, as I don’t know enough about it (& am still pretty sceptical). This is the first half – the second 8-week term will cover competition, regulation and trade.

  1. Basics: Information and the economy, why digital is different from the old economy
  2. Implications for organisations: Business models, platforms, ecosystems; auctions
  3. Intangible assets: IP, copyright, data (& privacy), measurement
  4. Inputs: finance, energy and materials, chips
  5. Innovation: clusters and types, direction, policies
  6. The demand side: technology diffusion, ‘free’ services, household production, open source
  7. Impact on labour: task models, AI & jobs
  8. Policies for the digital/AI economy: skills/jobs, industrial policy, digital public infrastructure, ‘the stack’ & sovereignty

The structure might well shift as I get into details. But naturally I wondered whether somebody had done much of the work for me and written a textbook covering a lot of this. And it seems the answer is no: nobody has done an update of Carl Shapiro and Hal Varian’s fantastic 1998 Information Rules. I just re-read it and of course the examples have dated but the principles have not. The brilliant Daniel Rock told me (via BlueSky) that he still teaches using it.

I wrote Markets, State and People a few years ago as a policy economics textbook because the ones available were unsatisfactory for the course I was teaching then – none took the perspective of the basic co-ordination problems policy seeks to tackle. So maybe it’s me. But I still think there is a gap in the market for a new Information Rules – somebody could surely do this in time for its 30th anniversary?


Leviathan, supersized

My dear husband gave me a Daunts book subscription for my birthday so I get a lucky dip new paperback each month. A recent one was my colleague David Runciman’s The Handover: How We Gave Control of Our Lives to Corporations, States and AIs, first published in 2023. As David writes too many books for me to keep up with, I hadn’t already read it. The core argument is that human societies have already ceded many decision-making powers to non-human entities, namely states and corporations.

I read most of the book thinking, ‘Yes, but….’, as it’s a neat argument but not watertight. It starts with Hobbes, and the idea of non-human persons as it developed in different institutional forms. A key difference with decisions made by machine agents seems to lie in their autonomy – their lack of openness to change or redress; and changing that requires them to be part of states and corporations rather than separate entities.

The book does, though, sort of acknowledge this towards the end: “If the machine decides what happens next, no matter how intelligent the process by which that choice was arrived at, the possibility of catastrophe is real, because some decisions need direct human input. It is only human beings whose intelligence is attuned to the risk of having asked the wrong question.” He goes on to link this back to the claim that the state is a ‘political machine’ or ‘artificial decision-making machine’ so there is no difference really between states and AIs – but this, again, makes the use of AIs in political domains part of the state machine.

He concludes: “For now the bigger choice is whether the artificial agency of the state is joined with human intelligence or artificial intelligence.” Will AI crowd out the humanity? Looking at the US now, though, this seems like a question from another era, a gentler era. The new regime there has merged state, corporation and AI in a behemoth that dwarfs Hobbes’ Leviathan.


Robots among us

I ended up with mixed reactions to Waiting for Robots: The Hired Hands of Automation by Antonio Casilli.

The powerful point it makes is the complete dependence of AI and digital technologies generally on ongoing human input. Many years ago, my husband – then a technology reporter for the BBC – was digging out the facts about a hyped dot com company called Spinvox. Its business was supposedly automated voice transcription, but it turned out the work was mainly done by humans, not computers (although the story turned scratchy – the linked post responds to the company’s points). Waiting for Robots gives many examples of apps that similarly involve cheap human labour rather than digital magic – I was surprised by this. Less surprising – and indeed covered in other books such as Madhumita Murgia’s recent Code Dependent – is the use of humans in content moderation (remember when big social media companies used to do that?), data labelling and other services, from Mechanical Turk to reinforcement learning with human feedback for LLMs.

The book also claims much more as ‘labour’, and this is where I disagree. Of course big tech benefits from my digital exhaust and from content I post online such as cute dog photos. But this seems to me categorically different from often (badly) paid employment relationships. Although the stickiness of network effects or habit might keep me on a certain service, and although the companies might set the defaults so they hoover up my activity data, the power dynamics are different. I can switch, for instance from X to BlueSky, or from Amazon to my local bookstore. So I’m not a fan of portraying these types of data provision as ‘digital labour’.

Having said that, the book makes a compelling case that robots and humans are interdependent and will remain so. Generative AI will continue to need human-produced material (‘data’) and intervention to avert model collapse. Humans are also going to have to pay for digital services, so will need money to pay with. Focusing on the economic dynamics involved is crucial, as it is clear that the market/platform/ecosystem structures are currently tilted towards the (owners of) robots and away from humans. So, for all that I’m not persuaded by the classification of different types of ‘digital labour’ here (and find the anti-capitalist perspective on tackling the challenges unpragmatic, apart from anything else), there is a lot of food for thought in Waiting for Robots.


The welcome application of good sense to AI hype

Summer over in a flash, autumn wind and rain outside – perhaps cosy evenings will speed up both my reading and review-posting.

I just finished AI Snake Oil by Arvind Narayanan and Sayash Kapoor, having long been a fan of the blog of the same name. The book is a really useful guide through the current hype. It distinguishes three kinds of AI – generative, predictive and content moderation AI – an interesting categorisation.

On generative AI, so much in the air since ChatGPT was launched in late 2022, the persuasive debunking here is of the idea that we are anywhere close to ‘general’ machine intelligence, and of the notion that such models pose existential risks. The authors are far more concerned with the risks associated with the use of predictive AI in decision-making. These chapters provide an overview of the dangers: from data bias to model fragility or overfitting to the broad observation that social phenomena are fundamentally more complicated than any model can predict. As Professor Kevin Fong said in his evidence to the Covid inquiry last week, “There is more to know than you can count.” An important message in these times of excessive belief in the power of data.

The section on the challenges of content moderation was particularly interesting to me, as I’ve not thought much about it. The book argues that content moderation AI is no silver bullet to tackle the harms related to social media – in the authors’ view it is impossible to remove human judgement about context and appropriateness. They would like social media companies to spend far more on humans and on setting up redress mechanisms for when the automated moderation makes the wrong call: people currently have no recourse. They also point out that social media is set up with an internal AI conflict: content moderation algorithms are moderating the very content the platform’s recommendation algorithms are recommending. The latter have the upper hand because recommendation doesn’t involve delicate judgements about content, only tracking the behaviour of platform users to amplify popular posts.

There have been a lot of new books about AI this year, and I’ve read many good ones. AI Snake Oil joins the stack: it’s well-informed, clear and persuasive – the cool breeze of knowledge and good sense is a welcome antidote for anybody inclined to believe the hyped claims and fears.
