In What Ways Could An A.I. Be Conscious?

Gabriel Murray
Aug 12, 2019

If we want to speculate about whether an A.I. system could be conscious, we first need to specify which type of consciousness we mean. There is more than one type, and it seems clear that an A.I. could have some forms of consciousness, though there may be other types it could not have. I’ll start with the types of consciousness that it seems an A.I. could have.

Access Consciousness: Following Ned Block, we’ll say that an agent (biological or artificial) is access conscious of a thing if it can do all of the following:
- Reason about that thing.
- Speak about that thing.
- Act on that thing.
Consider a conversational agent that can learn about a person’s preferences and purchase things on the person’s behalf. That agent might be access conscious of a fact such as “Ottawa is the capital of Canada.” It can reason about the fact, e.g. if it knows the person wants a flight to the capital of Canada, then it also knows that the person wants a flight to Ottawa. As a conversational agent, it could speak with the person and answer a question about the capital of Canada. And it could act as well, by purchasing a flight to Ottawa when it is asked to purchase a flight to the capital of Canada. Given our definition of access consciousness, I think we’d have to say that an artificial agent could be access conscious.
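
To make those three abilities concrete, here is a minimal sketch of such an agent in Python. Everything in it (the KNOWLEDGE table, the TravelAgent class and its methods) is hypothetical and invented for illustration; it assumes nothing about any real chatbot framework.

```python
# A toy illustration of access consciousness: an agent that can
# reason about, speak about, and act on a single fact.
# All names here are hypothetical, for illustration only.

KNOWLEDGE = {"capital of Canada": "Ottawa"}

class TravelAgent:
    def resolve(self, phrase: str) -> str:
        """Reason: substitute a description with the entity it denotes."""
        return KNOWLEDGE.get(phrase, phrase)

    def speak(self, question: str) -> str:
        """Speak: answer a question using the same fact."""
        if question == "What is the capital of Canada?":
            return f"The capital of Canada is {self.resolve('capital of Canada')}."
        return "I don't know."

    def act(self, request: str) -> str:
        """Act: use the fact in the rational control of behaviour."""
        destination = self.resolve(request.removeprefix("book a flight to the "))
        return f"Booked a flight to {destination}."

agent = TravelAgent()
print(agent.speak("What is the capital of Canada?"))        # speaks about the fact
print(agent.act("book a flight to the capital of Canada"))  # acts on it
```

The point is not that this trivial program is conscious in any interesting sense, but that the definition of access consciousness is cashed out entirely in abilities of this kind, which software can straightforwardly have.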

Monitoring Consciousness: An agent has monitoring consciousness if it is capable of monitoring its own internal states. Computer programs certainly have some forms of this capability: a program can keep track of how many times a function has been called or how many times a variable’s value has changed, and an artificial agent can keep track of whether or not a goal has been achieved. So we would want to say that an artificial agent is capable of monitoring consciousness.
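
For instance, here is a minimal Python sketch of two such self-monitoring mechanisms: a decorator that counts how many times a function has been called, and a toy Goal object that tracks whether a goal has been achieved. Both are hypothetical constructions for illustration, not drawn from any particular agent framework.

```python
import functools

def counted(fn):
    """Wrap a function so the program can monitor how often it is called."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return fn(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@counted
def plan_step():
    pass  # stand-in for some piece of the agent's reasoning

class Goal:
    """A toy record of whether a goal has been achieved."""
    def __init__(self, description: str):
        self.description = description
        self.achieved = False

goal = Goal("purchase a flight to Ottawa")
plan_step()
plan_step()
print(plan_step.calls)  # 2: the program monitors its own activity
print(goal.achieved)    # False: and its own progress toward a goal
```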

Self Consciousness: An agent has self consciousness if it has the concept of a self and can apply that concept to itself. Consider an artificial agent in a multi-agent scenario. It needs to have the concept of an agent, and it needs to recognize that it is collaborating with some agents that share its goals, competing with other agents that have opposing goals, and that it is itself an agent with goals of its own. I would call this a type of self consciousness.
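
Here is a minimal Python sketch of that idea, with all class and method names hypothetical: an agent that has one general concept of an agent with goals, and applies that same concept to collaborators, to competitors, and to itself.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One general concept of an agent: a named thing with goals."""
    name: str
    goals: set = field(default_factory=set)

    def relation_to(self, other: "Agent") -> str:
        """Apply the agent concept to another agent, or to itself."""
        if other is self:
            return "self"          # the concept applied reflexively
        if self.goals & other.goals:
            return "collaborator"  # shares at least one of my goals
        return "competitor"        # no shared goals; in this toy world, a competitor

me = Agent("me", {"win the auction"})
ally = Agent("ally", {"win the auction"})
rival = Agent("rival", {"outbid the others"})

for a in (me, ally, rival):
    print(a.name, "->", me.relation_to(a))
# me -> self, ally -> collaborator, rival -> competitor
```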

Phenomenal Consciousness: This type of consciousness relates to inner experience, “subjective feels,” or what are sometimes called qualia: for example, the experience of smelling toast, or of tasting cinnamon. In Thomas Nagel’s famous paper What Is It Like To Be A Bat?, he describes this type of consciousness: “But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism — something it is like for the organism. We may call this the subjective character of experience.” Following Nagel, Ian Ravenscroft defines phenomenal consciousness as follows:

“For any phenomenally conscious experience E there is something that it is like to have E.”

There is a lot that is mysterious about phenomenal consciousness. It is not at all clear how it arises in humans, and trying to explain this mystery has famously been dubbed the hard problem of consciousness. Because we don’t know how subjective experience arises, it is extremely speculative and as yet unfounded to claim that an artificial agent could have this type of consciousness. Is there something that it is like to be a chatbot? I am doubtful. For one thing, being embodied may be a key aspect of our human subjective experience. We will have a lot more to say about phenomenal consciousness in future posts.

Looking at the definitions of consciousness above, it seems that the word consciousness is doing a lot of heavy lifting and perhaps being spread too thin. Notice that the first three definitions — the ones that we said an A.I. would plausibly have — are all defined in terms of abilities or skills: information processing skills, or the ability to have and apply a concept. Those three definitions are also extrinsic in some sense: we could look at an artificial agent’s behaviour and its implementation and determine whether it has those abilities and concepts. Phenomenal consciousness is completely different: it is intrinsic, and it is experiential rather than relating to abilities or skills. We cannot verify whether or not it is present in any agent other than ourselves. I’d prefer that we reserve the word consciousness for phenomenal consciousness and use other terms for the other types of consciousness described above. And in that case, I’d say that it’s very doubtful that an A.I. could be conscious.

Originally published at http://makingupminds.com on August 12, 2019.
