The people’s AI?

The title of Maximilian Kasy’s new book The Means of Prediction cleverly riffs off Marx’s concept of the means of production for the age of AI. These means, Prof Kasy argues, are data, computational infrastructure, technical skills and energy. The core argument is that these means rest in a few hands, so the value being created by AI is concentrated in a relatively small number of technology companies and their founders and employees. What’s more, this concentration also skews the kind of AI systems being created: AI is designed to optimise some objective function, as I argued in Cogs and Monsters (and some recent as-yet-unpublished lectures).

All this, the book argues, makes a powerful case for democratising AI. Unless countervailing power from workers, consumers, and politicians is brought to bear, the technology will create further inequality and will not serve the public good, only private, profit-maximising interests. Kasy convincingly argues for collective means of control, rather than just individual protections such as GDPR: “Machine learning is never about individual data points, it is always about patterns across individuals.” Control over one’s own personal data does not protect privacy as long as similar people share their data.

The book starts with a section explaining AI, followed by a section explaining how economists understand social welfare – this being the approach (as Cogs explains) that is being automated by AI. These are very clear and useful for people who are hazy about either, although as they are so introductory it did make me wonder about the target audience for the book. Having said that, there is such a lack of knowledge among the public and indeed lots of policymakers and politicians that these sections are probably sorely needed.

The final two sections go on to regulatory challenges and the need for democratising the development and use of AI. As Kasy points out, policy decisions have always been choices, with winners and losers, and have often involved predictions; after all, this is what policy economists have been doing for decades. The increasing use of AI to make decisions automates and speeds up choices – the danger being that it does so with embedded biases and implicit decisions hidden by the technology, and the all-too-common presumption that the machine must be right.

The optimistic take on where we are is that the use of AI to make predictions and decisions will surface some of the implicit assumptions and biases, and so force more public deliberation about how our societies operate and affect different people. The pessimistic take is of course that they simply become more deeply hidden and entrenched. Depending on my mood, at present I think things could go either way. But that open prospect makes The Means of Prediction a very timely book. And – having pondered who it was aimed at – probably one that every official and politician should be made to read as they chirrup about using AI in public services.
