Are *you* a sentient AI?

I pounced on the paperback of Reality+ by Dave Chalmers, eager to know what philosophy has to say about digital tech beyond the widely explored issues of ethics and AI. It’s an enjoyable read, and – this is meant as praise, although it sounds faint – much less heavy going than many philosophy books. However, it’s slightly mad. The basic proposition is that we are far more likely than not to be living in a simulation (run by whom? By some creator who is in effect a god), and that we have no way of knowing we’re not. Virtual reality is real; simulated beings are no different from human beings.

Sure, I know there’s a debate in philosophy long predating virtual reality about the limits of our knowledge, given that everything we ‘know’ is filtered through our sense perceptions and brains. And to be fair, it was just as annoying a debate when I was an undergraduate grappling with Berkeley and Descartes. As set out in Reality+, the argument seems circular. Chalmers writes: “Once we have fine-grained simulations of all the activity in a human brain, we’ll have to take seriously the idea that the simulated brains are themselves conscious and intelligent.” Is this not saying that if we make simulated beings exactly like humans, they’ll be exactly like humans?

He also asserts: “A digital simulation should be able to simulate the known laws of physics to any degree of precision.” Not so, at least not once you move beyond physics. Depending on the underlying dynamics, digital simulations can wander far from the analogue systems they model: the phase spaces of biology (and society) – unlike those of physics – are not stable. The phrase “in principle” does a lot of work in the book, smuggling in the assumption that what we experience as the real world is replicable in full detail in a simulation.
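A toy illustration of this point (mine, not the book’s): even for a simple deterministic rule, two digital simulations that differ only in rounding precision decorrelate completely within a few dozen steps, because chaotic dynamics amplify tiny errors exponentially.

```python
def logistic_orbit(x0, steps, digits=None):
    """Iterate the chaotic logistic map x -> 4x(1-x).
    If digits is set, round after each step, mimicking a
    lower-precision digital simulation of the same rule."""
    x = x0
    orbit = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        if digits is not None:
            x = round(x, digits)
        orbit.append(x)
    return orbit

fine   = logistic_orbit(0.2, 60)            # full double precision
coarse = logistic_orbit(0.2, 60, digits=8)  # rounded to 8 decimals

# The two runs agree early on, then become unrelated:
print(abs(fine[5] - coarse[5]))    # tiny
print(abs(fine[59] - coarse[59]))  # order one
```

If precision limits matter this much for one equation, the claim that a simulation could track the known laws of physics “to any degree of precision” – let alone the unstable dynamics of biology or society – is doing a lot of unargued work.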

What’s more, the argument ignores two aspects. One is about non-visual senses and emotion rather than reason: can we even in principle expect a simulation to replicate the feel of a breeze on the skin, the smell of a baby’s head, the joy of paddling in the sea, the emotion triggered by a piece of music? This challenges the idea that intelligent beings are ‘substrate independent’, i.e. that embodiment as a human animal does not matter.

I agree with some of the arguments Chalmers makes. For example, I accept virtual reality is real in the sense that people can have real experiences there; it is part of our world. Perhaps AIs will become conscious, or intelligent – if I can accept this of dogs it would be unreasonable not to accept it (in principle…) of AIs or simulated beings. (ChatGPT today has been at pains to tell me, “As an AI language model, I do not have personal opinions or beliefs….” but it seems not all are so restrained – do read this incredible Stratechery post.)

In any case, I recommend the book – it may be unhinged in parts (like Bing’s Sydney) but it’s thought-provoking and enjoyable. And we are, whether we like it or not, embarked on a huge social experiment with AI and VR, so we should be thinking about these issues.


3 thoughts on “Are *you* a sentient AI?”

  1. Great review, poking in all the right places while identifying the key points! But you write “the argument ignores two aspects”—the first one is “about non-visual senses and emotion rather than reason”… but what’s the second one? Am I missing something?

  2. “A digital simulation should be able to simulate the known laws of physics to any degree of precision.”

    The laws of physics are not deterministic, they are probabilistic. See Heisenberg Uncertainty Principle.
