Generalizable Scientific Theories of Machine Consciousness

Venue: Birkbeck Main Building, MAL 151

This event has ended.

In philosophy and popular culture there has been a great deal of speculation about the consciousness of machines and the transfer of consciousness from people to machines. The first part of this talk will distinguish four different types of machine consciousness:

MC1. Machines with the same external behaviour as conscious systems.
MC2. Models of the correlates of consciousness.
MC3. Models of phenomenal consciousness.
MC4. Machines that have phenomenal experiences that are similar to our spatially and temporally distributed experiences of colour, smell, taste, etc.

Examples will be given of systems that fit into each of these categories.

The next part of the talk will discuss whether we can build MC4 conscious machines. Many people have addressed this question by using intuition and imagination to decide whether a machine is conscious: if a robot looks like a human and behaves in a similar way to a conscious human, then we are inclined to attribute consciousness to it. The question of MC4 machine consciousness will be settled more convincingly when we have developed scientific theories that can make detailed predictions about consciousness in any physical system.

Most previous work on consciousness has focused on the neural correlates of consciousness in humans and similar animals. The limitation of this work is that it cannot be generalized to artificial systems that work in a different way from the human brain. To move beyond neural correlates and discover generalizable theories of consciousness we have to solve four difficult problems. First, we need to reach agreement about how consciousness can be measured, and abandon the idea that consciousness can be measured through a machine's external behaviour. Second, we need to move away from neural correlates of consciousness and find new ways of describing spatiotemporal patterns in the brain that could form the basis for generalizable theories of consciousness. Third, we need less anthropomorphic ways of describing consciousness. Finally, we have to drop our desire for intuitively satisfying explanations of consciousness and search for mathematical relationships between formal descriptions of consciousness and formal descriptions of the physical world. When these problems have been solved, we will be able to use human and animal experiments to discover general mathematical theories of consciousness that can make believable predictions about MC4 consciousness in artificial systems.


David Gamez is a lecturer in computer science at Middlesex University, with expertise in philosophy, artificial intelligence and neuroscience. His latest book, Human and Machine Consciousness, came out this year, and his previous publications include What We Can Never Know (2007), What Philosophy Is (2004, co-edited with Havi Carel), and many papers and book chapters on philosophy, artificial intelligence and neuroscience. A complete list of Gamez's talks and publications is available on his website.