Not engineering

This week I read Herbert Simon’s The Sciences of the Artificial, first published in 1969, and must confess to not knowing what to make of it. He contrasts ‘the artificial’ with ‘the natural’ but also, more specifically, applies it to complex systems designed to adapt to their environment, comparing arenas such as economics, management, and engineering. The point is then to analyse what properties these systems have in common, on the basis of information theory. The aim is evidently to present a common analysis of the various types of system considered. So far, so good. And as Professor Google tells me, the book was very influential in the design of computer systems and design theory more generally.

Beyond that? Well, I asked Bard, which said:

“The book argues that artificial systems can be studied scientifically, just like natural systems. Simon defines an artificial system as “a system that has been brought into being by human design.” He argues that artificial systems are not just imitations of natural systems, but are in fact a distinct class of systems with their own properties and laws.

Simon then goes on to discuss the various aspects of artificial systems that can be studied scientifically. He covers topics such as the design of algorithms, the nature of problem solving, and the role of heuristics in decision making. He also discusses the relationship between artificial systems and human intelligence.”

Very timely, then. But I’m hard pushed to summarise what the conclusions are, and allergic to the strong positivism of it. As readers of Cogs and Monsters will know, I think economic systems differ from engineering systems in important ways, and are not amenable to exactly the same kind of ‘scientific’ analysis. The ‘sciences of the artificial’ seem to do well in the analysis of algorithmic systems, but much less well – contrary to the claim in the book – for education, art and architecture, or indeed economics.


Are *you* a sentient AI?

I pounced on the paperback of Reality+ by Dave Chalmers, eager to know what philosophy has to say about digital tech beyond the widely-explored issues of ethics and AI. It’s an enjoyable read, and – this is meant to be praise, although it sounds faint – much less heavy-going than many philosophy books. However, it’s slightly mad. The basic proposition is that we are far more likely than not to be living in a simulation (by whom? By some creator who is in effect a god), and we have no way of knowing that we’re not. Virtual reality is real, simulated beings are no different from human beings.

Sure, I do know there’s a debate in philosophy long predating Virtual Reality concerning the limits of our knowledge and the limitation that everything we ‘know’ is filtered through our sense perceptions and brains. And to be fair it was just as annoying a debate when I was an undergraduate grappling with Berkeley and Descartes. As set out in Reality+ the argument seems circular. Chalmers writes: “Once we have fine-grained simulations of all the activity in a human brain, we’ll have to take seriously the idea that the simulated brains are themselves conscious and intelligent.” Is this not saying, if we have simulated beings exactly like humans, they’ll be exactly like humans?

He also asserts: “A digital simulation should be able to simulate the known laws of physics to any degree of precision.” Not so, at least not when departing from physics. Depending on the underlying dynamics, digital simulations can wander far away from the analogue: the phase spaces of biology (and society) – unlike physics – are not stable. The phrase “in principle” does a lot of work in the book, embedding this assumption that what we experience as the real world is exactly replicable in detail in a simulation.
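The point about digital simulations wandering away from the analogue can be illustrated with a toy chaotic system (my example, not Chalmers’s): even the precision of the arithmetic changes where a simulation ends up.

```python
# Illustrative sketch, not from the book: two digital simulations of the same
# chaotic system, differing only in numerical precision, soon disagree entirely.
# The logistic map x -> r*x*(1-x) with r = 4 is a standard chaotic example.
import struct

def round_to_single(x):
    # Round a 64-bit Python float down to 32-bit precision and back up.
    return struct.unpack('f', struct.pack('f', x))[0]

r = 4.0
x_double = 0.1   # trajectory computed in 64-bit precision
x_single = 0.1   # same starting point, but rounded to 32-bit precision each step

max_gap = 0.0
for step in range(60):
    x_double = r * x_double * (1 - x_double)
    x_single = round_to_single(r * x_single * (1 - x_single))
    max_gap = max(max_gap, abs(x_double - x_single))

print(max_gap)   # a sub-microscopic rounding difference has grown to macroscopic size
```

Within a few dozen steps the two trajectories bear no resemblance to each other; precision ‘in principle’ is doing a lot of work.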

What’s more, the argument ignores two aspects. One is about non-visual senses and emotion rather than reason – can we even in principle expect a simulation to replicate the feel of a breeze on the skin, the smell of a baby’s head, the joy of paddling in the sea, the emotion triggered by a piece of music? I think this is to challenge the idea that intelligent beings are ‘substrate independent’, i.e. that embodiment as a human animal does not matter.

I agree with some of the arguments Chalmers makes. For example, I accept virtual reality is real in the sense that people can have real experiences there; it is part of our world. Perhaps AIs will become conscious, or intelligent – if I can accept this of dogs it would be unreasonable not to accept it (in principle…) of AIs or simulated beings. (ChatGPT today has been at pains to tell me, “As an AI language model, I do not have personal opinions or beliefs….” but it seems not all are so restrained – do read this incredible Stratechery post.)

In any case, I recommend the book – it may be unhinged in parts (like Bing’s Sydney) but it’s thought-provoking and enjoyable. And we are whether we like it or not embarked on a huge social experiment with AI and VR so we should be thinking about these issues.


Keeping models in their place

The increasing use of algorithmic decision making raises some challenging questions, from bias due to societal biases being baked into training data, to the loss of the space for compromise (due to the need to codify a loss or reward function in a machine learning system) that is so important in addressing conflicting aims and values in democratic societies. The broader role of models as a means of both understanding and shaping society is one of the themes of my most recent book, Cogs and Monsters, in the domain of economics. In particular, I wanted to expose the reflexivity involved in economic modelling: being a member of a society, analysing that society in order to try to change it, when its other members – subjects, not objects – may well alter the analysed behaviour in often-unanticipated ways.

Well, all of this is the subject of Erica Thompson’s excellent book Escape from Model Land: how mathematical models lead us astray and what we can do about it. It focuses on the use of algorithmic models, and has a wide range of examples, from health and epidemiological modelling to climate projections to financial markets. Alongside reflexivity, it covers some familiar challenges such as performativity, non-linear dynamics, complex systems, structural breaks and risk vs ‘radical’ uncertainty. The ultimate conclusion is the need to be duly humble about what models can achieve. Alas, people – ‘experts’ – all too often seem to get caught up in the excitement about technical possibilities without the thoughtfulness needed to make decisions that will affect people’s lives in important ways.

So this is a much-needed and welcome book, and I’m looking forward to taking part in an event with the author at the LSE in January.


Our robot overlords?

I’ve chatted to Martin Ford about his new book Rule of the Robots for a Bristol Festival of Ideas event – the recording will be out on 6 October.

It’s a good read and quite a balanced perspective on both the benefits and costs of increasingly widespread use of AI, so a useful intro to the debates for anyone who wants an entry into the subject. There are lots of examples of applications with huge promise such as drug discovery. The book also looks at potential job losses from automation and issues such as data bias.

It doesn’t much address policy questions, with the exception of arguing in favour of UBI. Regular readers of this blog will know I’m not a fan, as UBI seems like the ultimate Silicon Valley, individualist solution to a Silicon Valley problem. I’d advocate policies to tilt the direction of automation, as there’s a pecuniary externality: individual firms don’t factor in the aggregate demand effects of their own cost-reduction investments. And also policies that address collective needs – public services, public transport, as well as a fair and sufficiently generous benefits system. No UBI in practice would ever be set high enough to address poverty and the lack of good jobs: to pay everyone anything like average income, you would have to collect in taxes the whole of national income – and more, once any other public spending is counted.
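The arithmetic behind that last point fits in a few lines (the population and income figures below are hypothetical round numbers of my own, not from the book):

```python
# Back-of-envelope sketch with hypothetical round numbers: a UBI paid to
# everyone at mean income costs, by construction, 100% of total income,
# before a penny is raised for any other public spending.
population = 50_000_000              # hypothetical adult population
mean_income = 30_000                 # hypothetical mean annual income
other_spending_share = 0.35          # assumed non-UBI public spending, as a share of income

total_income = population * mean_income
ubi_cost = population * mean_income  # a UBI set at mean income
required_tax_share = (ubi_cost + other_spending_share * total_income) / total_income

print(f"Tax take needed: {required_tax_share:.0%} of total income")  # → 135%
```

Whatever plausible numbers you substitute, a UBI at anything like mean income pushes the required tax take past 100% of income.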

But that debate is what the Bristol event is all about!


Are humans or computers more reasonable?

This essay, The Long History of Algorithmic Fairness, sent me to some new-to-me references, among them How Reason Almost Lost its Mind by Paul Erickson and five other authors. The book is the collective output of a six-week stint in 2010 at the Max Planck Institute for the History of Science in Berlin. That alone endeared it to me – just imagine being able to spend six weeks Abroad. And in Berlin, which was indeed my last trip Abroad in the brief period in September 2020 when travel was possible again. I started the book with some trepidation as collectives of academics aren’t known for crisp writing, but it’s actually very well written. I suspect this is a positive side-effect of interdisciplinarity: the way to learn each other’s disciplinary language is to be as clear as possible.

The book is very interesting, tracing the status of ‘rationality’ in the sense of logical or algorithmic reasoning, from the low status of human ‘computers’ (generally poorly-paid women) in the early part of the 20th century, to the high status of Cold War experts devising game theory and building ‘computers’, to the contestation about the meaning of rationality in more recent times: is it logical calculation, or is it what Herbert Simon called ‘procedural rationality’? This is a debate most recently manifested in the debate between the Kahneman/Tversky representation of human decision-making as ‘biased’ (as compared with the logical ideal) and the Gerd Gigerenzer argument that heuristics are a rational use of constrained mental resources.

How Reason… concludes, “The contemporary equivalents of Life and Business Week no longer feature admiring portraits of ‘action intellectuals’ or ‘Pentagon planners’, although these types are alive and well.” The arc of status is bending down again, although arguably it’s machine learning and AI – ur-rational calculators – rather than other types of humans gaining the top dog slot nowadays. As I’ve written in the economic methodology context, it’s odd that computers and also creatures from rats to pigeons to fungi are seen as rational calculators whereas humans are irrational.

Anyway, the book is mainly about the Cold War and how the technocrats reasoned about the existentially lethal game in which they were participants, and has lots of fascinating detail (and photos) about the period. From Schelling and Simon to the influence of operations research (my first micro textbook was Will Baumol’s Economic Theory and Operations Analysis) and shadow prices in economic allocation, the impact on economics was immense. (Philip Mirowski’s Machine Dreams covers some of that territory too, although I found it rather tendentious when I read it a while ago.) I’m interested in thinking about the implications of the use of AI for policy and in policy, and as it embeds a specific kind of calculating reason, thought How Reason Almost Lost its Mind was a very useful read.
