
How to Tell if Your A.I. Is Conscious

Have you ever had a conversation with someone who is interested in consciousness? How did that conversation go? Did they gesture vaguely with both hands? Did they mention the Tao Te Ching or Jean-Paul Sartre? Did they suggest that scientists can’t be certain about anything and that reality is only as real as we perceive it to be?

The study of consciousness has long been treated as taboo in the natural sciences because the concept resists precise definition. In the past, the question was left mostly to philosophers, though they were often not much better at defining it than anyone else. Hod Lipson, a roboticist at Columbia University, said that some people in his field referred to consciousness as “the C-word.” Grace Lindsay, a neuroscientist at New York University, noted that there was once an idea that you couldn’t study consciousness until you had tenure.

However, a recent report written by a group of philosophers, neuroscientists, and computer scientists, including Dr. Lindsay, proposed a framework for determining whether an artificial intelligence (A.I.) system like ChatGPT could be considered conscious. The report, which surveys the emerging science of consciousness, combines elements from several nascent empirical theories and presents a list of measurable qualities that might suggest the presence of consciousness in a machine.

One theory, known as recurrent processing theory, focuses on the distinction between conscious perception (actively observing an apple) and unconscious perception (such as perceiving an apple flying toward your face without consciously processing it). Neuroscientists argue that unconscious perception occurs when electrical signals pass from the nerves in our eyes to the primary visual cortex and then to deeper parts of the brain. In contrast, conscious perception arises when these signals are sent back to the primary visual cortex, creating a loop of activity.
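To make that distinction a little more concrete, here is a toy sketch in Python. It is my own illustration, not anything from the report: a single feedforward pass stands in for the one-way sweep of signals, and a loop that feeds deeper activity back to the earlier layer stands in for recurrent processing. The layer sizes and random weights are arbitrary.

```python
# A minimal sketch contrasting a one-way feedforward sweep with a
# recurrent loop, in the spirit of recurrent processing theory.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(8, 16))        # "eye" -> "primary visual cortex"
W_deep = rng.normal(size=(16, 16))     # primary cortex -> deeper areas
W_back = rng.normal(size=(16, 16))     # deeper areas -> back to primary cortex

stimulus = rng.normal(size=8)          # stand-in for the apple

# One-way sweep only (the theory's analogue of unconscious perception):
# signals move forward and never return.
primary = np.tanh(stimulus @ W_in)
deep = np.tanh(primary @ W_deep)

# Recurrent processing (the analogue of conscious perception): deeper
# activity is fed back to the earlier layer, forming a loop.
for _ in range(5):
    primary = np.tanh(stimulus @ W_in + deep @ W_back)
    deep = np.tanh(primary @ W_deep)

print(deep[:4])   # activity after several loops of feedback
```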

Another theory, known as global workspace theory, starts from the fact that different brain regions are specialized for different tasks. The part of your brain that helps you balance on a pogo stick, for example, differs from the part that allows you to take in a vast landscape. We can integrate all this information, but only to a limited degree. So, these neuroscientists suggest, a “global workspace” provides control and coordination over what we pay attention to, remember, and perceive. Consciousness may arise from this integrated, dynamic workspace.
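One rough way to picture the idea, again as a hedged toy sketch rather than anything the theory’s proponents have written down in code: specialized modules each bid for attention, one proposal wins, and its content is broadcast back to every module. The module names and salience scores below are invented for illustration.

```python
# A toy sketch of the global-workspace idea: specialized modules propose
# content with a salience score, the most salient proposal wins, and it
# is broadcast back to every module.
from dataclasses import dataclass

@dataclass
class Proposal:
    source: str      # which specialized module produced it
    content: str     # what it wants the rest of the system to know
    salience: float  # how strongly it bids for attention

def global_workspace_step(proposals):
    """Pick the most salient proposal and broadcast it to all modules."""
    winner = max(proposals, key=lambda p: p.salience)
    broadcast = {p.source: winner.content for p in proposals}
    return winner, broadcast

proposals = [
    Proposal("balance", "the pogo stick is tipping left", 0.9),
    Proposal("vision", "a wide landscape ahead", 0.4),
    Proposal("memory", "you fell off last time", 0.6),
]

winner, broadcast = global_workspace_step(proposals)
print(winner.source, "->", winner.content)  # balance -> the pogo stick is tipping left
```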

Consciousness could also stem from the ability to be self-aware, to create virtual models of the world, to predict future experiences, and to perceive one’s body in space. According to the report, any one of these features could be an essential part of consciousness. And if we can identify these traits in a machine, we might be able to consider the machine conscious.

One challenge with this approach is that advanced A.I. systems primarily consist of deep neural networks that “learn” autonomously in ways that are not always interpretable by humans. While we can gather some information about their internal structure, our understanding remains limited at present. This is the “black box problem” of A.I. Thus, even with a comprehensive rubric for consciousness, applying it to the machines we use daily would be difficult.
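As a small, hedged illustration of why that is hard: even when we can read out a model’s parameters, what we get is arrays of unlabeled numbers, not anything that maps cleanly onto concepts like a “workspace” or a sense of self. The toy “model” below is just random matrices standing in for a trained network.

```python
# Stand-in for a trained model: a few weight matrices of arbitrary size.
# Inspecting them yields raw numbers, not interpretable concepts.
import numpy as np

rng = np.random.default_rng(1)
layers = [rng.normal(size=(64, 64)) for _ in range(3)]

print(sum(w.size for w in layers), "parameters")  # 12288 parameters
print(layers[0][:2, :3])  # a tiny slice: just unlabeled real numbers
```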

The authors of the report emphasize that their proposed list is not a definitive test of consciousness. It relies on a perspective called “computational functionalism,” which reduces consciousness to pieces of information passed back and forth within a system, the way a ball is batted around inside a pinball machine. In this view, a pinball machine could, in principle, be conscious if it were made much more complex (though it might cease to be a pinball machine in the process). However, others argue that consciousness depends on biological or physical characteristics, or on social and cultural contexts, which cannot easily be coded into a machine.

Additionally, even for researchers who support computational functionalism, no existing theory seems sufficient on its own to fully account for consciousness. According to Dr. Lindsay, “For any of the conclusions of the report to be meaningful, the theories have to be correct. Which they’re not.” These limitations are a reminder of how incomplete our current understanding of consciousness remains.

Ultimately, could any combination of these features capture what William James described as the “warmth” of conscious experience, or what Thomas Nagel called “what it is like” to be you? There is a gap between our ability to measure subjective experience with scientific tools and the experience itself. This gap is what David Chalmers termed the “hard problem” of consciousness. Even if an A.I. system displays recurrent processing, a global workspace, and a sense of its physical location, it may still lack whatever it is that makes it feel like something.

When I raised this issue with Robert Long, a philosopher at the Center for A.I. Safety who led work on the report, he acknowledged it: “That feeling is kind of a thing that happens whenever you try to scientifically explain or reduce high-level concepts to physical processes.”

The stakes are high, as advances in A.I. and machine learning are outpacing our ability to explain what is going on inside these systems. In 2022, an engineer at Google claimed that the company’s LaMDA chatbot was conscious (although most experts disagreed). As generative A.I. becomes more integrated into our lives, discussions of this topic are likely to become increasingly contentious. Dr. Long argues that we need to start making claims about what might possess consciousness, and he criticizes the vague and sensationalist way such claims are often made. “This is an issue we face right now and over the next few years,” he said.

As Megan Peters, a neuroscientist from the University of California, Irvine, and an author of the report, said, “Whether there’s somebody in there or not makes a big difference on how we treat it.”

We already conduct a version of this research with animals, studying them carefully to make even the most basic claims about whether other species have experiences similar to our own, or even comprehensible to us. This can resemble a fun-house exercise: shooting empirical arrows at shape-shifting targets from moving platforms, with bows that occasionally turn out to be made of spaghetti. But sometimes we succeed. As Peter Godfrey-Smith wrote in “Metazoa,” cephalopods likely possess a robust subjective experience that differs categorically from that of humans. Octopuses, for example, have approximately 40 million neurons in each arm. What is that like?

To grapple with the problem of other minds, we rely on a series of observations, inferences, and experiments, both organized and improvised. We talk, touch, play, hypothesize, prod, control, X-ray, and dissect. Yet, ultimately, we still do not fully understand what makes us conscious. We only know that we are.