I read a proof of The Unaccountability Machine by Dan Davies with a view to blurbing it, and was more than happy to recommend it. This is a fascinating book. The subtitle indicates its scope: “Why Big Systems Make Terrible Decisions and How the World Lost Its Mind”. The book asks why mistakes and crises never seem to be anybody’s fault – it’s always ‘the system’. Davies uses the concept of the ‘accountability sink’ – a policy or set of rules that prevents individuals from making or changing decisions and thus being accountable for them. He writes: “For an accountability sink to function, it has to break a link; it has to stop feedback from the person affected by the decision from affecting the operation of the system. The decision has to be fully determined by the policy, which means that it cannot be affected by any information that wasn’t anticipated.” I predict that the more machine learning automates decisions, the more accountability sinks we will experience. Think of the Post Office Horizon scandal. But there are plenty of non-automated examples. Davies cites, for example, Gill Kernick’s wonderful book on the Grenfell disaster (and others), Catastrophe and Systemic Change.

The book draws heavily on Stafford Beer’s cybernetics, performing the public service of digesting all of his writings and making them accessible. Cybernetics was of course concerned with using the flow of information and enabling feedback. Decisions about how to make decisions are part of the system. Hence the often-quoted principle that “the purpose of a system is what it does” – and not what it says it does. The book has several chapters describing how systems operate, including how to conceptualise a ‘system’ in the complex, messy real world. Davies observes that this requires a representation that is “both rigorous and representative of reality.” The selection of categories and relationships in a system is a property of the analyst’s choices about description and classification, rather than of inherent reality. He describes – using plentiful examples – how systems so often malfunction.

The book has a chapter specifically diagnosing the strengths but also the malfunctions of economics. Davies writes: “Economics has been a major engine of information attenuation for the control system. Adopting the economic mode of thinking reduces the cognitive demands placed on our ruling classes by telling them there are a lot of things they don’t have to bother thinking about. … when decisions are made that have disastrous long-term consequences as a result of relatively trivial short-term cash savings, the pathology is often directly related to something that seemed like a good idea to an economist.” There’s an interesting section on ‘markets as computing fabric’, a ‘magic calculating machine’. This was echoed recently in some terrific posts by Henry Farrell and Cosma Shalizi. It’s a fruitful way of thinking about collective economic outcomes. I also strongly agree with the sections about collecting the data – classification and data collection is a super-power (as I’ve been writing for years in connection with GDP and beyond). The book says, “Numbers are collected for a purpose and it’s often surprisingly hard to use them for any other purpose.” Moreover, many numbers are not collected, which makes it hard to ‘prove’ claims about the potential for the system to operate differently.

The book ends by returning to system dysfunction – ‘morbidity’. From the toxic idea of shareholder value maximisation to the fentanyl crisis in the US, from the collapse of public infrastructure networks to the adverse effects of private equity (which Brett Christophers has dissected forensically in his book), economic and financial systems need a redesign. Davies suggests one step that he thinks would have a big impact: make private equity investors liable for company debts. Oh, and make sure the economists are not in charge: “Every decision-making system set up as a maximiser needs to have a higher-level system watching over it.”

The Unaccountability Machine does not directly address my current preoccupation, which is the implications of GOF machine learning and generative AI for automated decision-making in public services, but it is highly relevant to it. It’s a cracking read and I highly recommend it.