The Need for Emotional Experience Does Not Prevent Conscious Artificial Intelligence

I’ve mentioned the book The First Idea1 by Greenspan and Shanker many times (Is Human-Level AI Possible Without Childhood Emotions?, Infants, Monkeys, Love and AI, Are Emotional Structures the Foundation of Intelligence?). Lest anybody assume I am a fanboy of that tome, I want to push back against a ridiculous claim the authors make regarding consciousness and artificial intelligence.

Greenspan and Shanker make it quite clear that they don’t think AI can have consciousness:

What is the necessary foundation for consciousness? Can computers be programmed to have it or any types of truly reflective intelligence? The answer is NO! Consciousness depends on affective experience (i.e. the experience of one’s own emotional patterns). True affects and their near infinite variations can only arise from living biological systems and the developmental processes that we have been discussing.

Let’s look at that logically. The first part of their argument is that Consciousness (C) depends on Affective experience (A):

C → A

Even if that premise is true, conscious AI can still be made, since a machine can have affective experience; at least, that’s my stance.

But the next part states that Affective experience (A) depends on Biological systems (B) and Development processes (D):

A → B ∧ D

By transitivity, Consciousness is, in turn, dependent on Biological systems and Development processes:

C → B ∧ D

That’s bad for AI, because taking the contrapositive, it entails that a system which is not biological, or which lacks the proper development processes, cannot be conscious:

¬B ∨ ¬D → ¬C
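For what it’s worth, the inference itself is valid: C → B ∧ D follows from the two premises by hypothetical syllogism, and ¬B ∨ ¬D → ¬C is just its contrapositive. A minimal Python sketch (variable names are mine, not the authors’) confirms this by brute-forcing all truth assignments:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# Brute-force all 16 truth assignments to C, A, B, D and check that
# whenever the premises C -> A and A -> (B and D) both hold,
# the conclusions C -> (B and D) and (not B or not D) -> not C hold too.
entailed = True
for C, A, B, D in product([False, True], repeat=4):
    premises = implies(C, A) and implies(A, B and D)
    if premises:
        conclusion = implies(C, B and D) and implies((not B) or (not D), not C)
        if not conclusion:
            entailed = False

print(entailed)  # True: the inference is logically valid
```

So the logic is fine; my quarrel is with the premises, not the deduction.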

So the authors dramatically answer major questions of Strong AI and AGI in the negative with a deus ex machina of “true affects.”

But the authors give no evidence or rationale for “true affects.” Why would anybody assume that only living biological systems can experience emotional patterns? (More on defining emotions.)

Biological organisms have set the bar for emotions. But that doesn’t mean biological-level emotions and the experience of their patterns can’t be replicated in other substrates.

And development, too, is a concept that can be implemented in an artificial system.

The authors spend most of a book explaining affect, development, and how symbols are formed, and then slam on the emergency brake when they reach the possibility of replicating the very architectural and dynamic-systems concepts they spent so much time explaining.

For those who think I may have taken this out of context: if the authors meant typical disembodied computers as opposed to embodied computers (e.g., robots), they would have said so, and they don’t. Nor do they consider simulated bodies.

In conclusion, I still know of no reason that would prevent us from making conscious Strong AIs.


1. Greenspan, S.I. & Shanker, S.G., The First Idea: How Symbols, Language, and Intelligence Evolved from Our Primate Ancestors to Modern Humans, Da Capo Press, 2004.


Written by: Sam Kenyon