Mechanisms of Meaning

Towards Real Semantics for Strong AI

I talked about “symbol grounding” before in “AI Don’t Know Jack?” but there’s always more to say.

As a reminder, the things being grounded are not necessarily symbols in the old-fashioned computer sense; perhaps a better term than symbol grounding is concept grounding [Rabchevskiy, M. (2021). AGI: STRUCTURING THE OBSERVABLE. https://agieng.substack.com/p/agi-structuring-the-observable].

Whatever you call it, if the goal is to create human-level, or at least animal-like, Strong AI, the system would have to be grounded so that it understands the way we do.

It has to have internal meaning, and that meaning has to be at least as independent as ours is, unlike most AI today, which depends on the injection of meaning from humans. A Strong AI approach cannot cheat in this way, unless a legitimate way is found to circumvent the problem, or to explain things such that it’s not a problem at all.

Argument: Grounding is impossible in computers.

That’s barking up the wrong tree, the tree being John Searle’s Chinese Room Argument (CRA), which could be summarized as this question:

How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? [Harnad, S. (1990). The Symbol Grounding Problem. Physica D 42: 335-346.]

The core is a thought experiment:

Thus Searle’s rule book describes a procedure which, if carried out accurately, allows him to participate in an exchange of uninterpreted symbols – squiggles and squoggles – which, to an outside observer, look as though Searle is accurately responding in Chinese to questions in Chinese about stories in Chinese; in other words, it appears as if Searle, in following the rule book, actually understands Chinese, even though Searle trenchantly continues to insist that he does not understand a word of the language. [Nasuto, S.J., Bishop, J.M., Roesch, E.B., & Spencer, M.C. (2015). Zombie Mouse in a Chinese Room. Philosophy & Technology, 28, 209-223.]

The Chinese Room is Dead?

Well, not quite, since we don’t have understanding machines yet, at least not that I know of. And the CRA continues to poke holes in the computational theory of mind, which is probably a good thing.

But in my opinion the CRA does not prove that no artificial intelligence can ever achieve understanding, even if most AI instances fall into its traps.

The CRA mentality doesn’t comprehend the difference between written code and a running program: between static syntax and the dynamic, real-time system that actually “does” those syntactic operations.
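
To make that distinction concrete, here is a minimal, invented Python sketch (the rule table, tokens, and run() loop are my own toy example, not anything from Searle or Harnad): the same rule table is inert “syntax” sitting in a file, and only becomes a process with behavior when a loop actually executes it over time against incoming input.

```python
# Static syntax: a symbol-to-symbol rule table, meaningless on its own.
RULES = {
    "squiggle": "squoggle",
    "squoggle": "squiggle",
}

def run(rules, inputs):
    """Dynamic system: consumes a stream of input tokens over time
    and produces behavior by applying the static rules."""
    outputs = []
    for token in inputs:
        outputs.append(rules.get(token, "?"))  # rule application happens at runtime
    return outputs

print(run(RULES, ["squiggle", "squoggle", "blot"]))
# -> ['squoggle', 'squiggle', '?']
```

The table by itself never “does” anything; the loop is where the doing happens. (Neither, of course, understands anything, which is the point of the grounding problem in the first place.)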

It also doesn’t comprehend the ability of systems to create novel layers. That would be like saying the Internet cannot be built artificially because transistors themselves contain no Internet.

And it pretends that self-organizing systems don’t exist… but they do. Otherwise a human body would be impossible: a person made of meat? From DNA? Absurd! There’s no “body” in DNA. How could DNA turn into animated meat? Hogwash!

“No brain?”

“Oh, there’s a brain all right. It’s just that the brain is made out of meat! That’s what I’ve been trying to tell you.”

“So … what does the thinking?”

“You’re not understanding, are you? You’re refusing to deal with what I’m telling you. The brain does the thinking. The meat.”

“Thinking meat! You’re asking me to believe in thinking meat!” [Bisson, T. (1991). They’re Made Out of Meat. OMNI. http://www.terrybisson.com/theyre-made-out-of-meat-2/]

I think it’s absurd to give special properties to meat machines compared to non-meat machines. But real semantics is not an easy thing to implement.

But it seems apparent that the problem of connecting up with the world in the right way is virtually coextensive with the problem of cognition itself. [Harnad, S. (1990). The Symbol Grounding Problem. Physica D 42: 335-346.]

Some researchers have concluded that the CRA is correct at least to the degree that current cognitive robotics attempts are failing and that the computational theory of mind behind it all is not sufficient; they propose leaning towards enactive cognitive science and an:

enlarged perspective that includes the closed-loop interactions of a life-regulated body-brain dynamical system with an evolving world. [Nasuto, S.J., Bishop, J.M., Roesch, E.B., & Spencer, M.C. (2015). Zombie Mouse in a Chinese Room. Philosophy & Technology, 28, 209-223.]

Architecture

There’s still a lot more I’m planning to discuss about concept grounding, sensorimotor skills, exorcising representations and related philosophical battles.

But on another thread:

What higher mechanisms of meaning might there be in a Strong AI cognitive architecture?

These mechanisms would start at lower levels and build up to higher ones, or in some cases form a layer on top of other, lower-level mechanisms.

Here are a few ideas; I will be writing about all of these in the near future, and a rough sketch of how they might stack together follows the list.

  1. Emotions
  2. Affordances
  3. Metaphors and Analogies
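
As a very rough, purely illustrative sketch of how such mechanisms might stack within a single grounded concept, consider something like the following Python fragment; the Concept class and all of its field names are assumptions invented for this post, not any existing architecture.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """Hypothetical grounded concept carrying several layers of meaning."""
    name: str
    sensorimotor_features: dict = field(default_factory=dict)  # lowest level: grounding in perception/action
    emotional_valence: float = 0.0      # e.g. -1.0 (aversive) to +1.0 (appealing)
    affordances: list = field(default_factory=list)            # relational, agent-dependent action possibilities
    analogy_links: list = field(default_factory=list)          # higher level: links to other Concepts

cup = Concept(
    name="cup",
    sensorimotor_features={"graspable_width_cm": 8, "holds_liquid": True},
    emotional_valence=0.3,              # mildly positive (morning coffee)
    affordances=["grasp", "drink-from", "pour"],
)
bucket = Concept(name="bucket", analogy_links=[cup])  # "a bucket is like a big cup"
```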

Why Those?

Emotions may span from very low levels all the way “up”, and across different time/latency zones, so they are not just one thing in this context. At least one theory, described in the book The First Idea by Stanley Greenspan and Stuart Shanker, treats emotions during development as a core part of forming concepts.

An affordance is “what a user can do with an object based on the user’s capabilities” [https://www.interaction-design.org/literature/topics/affordances]:

As such, an affordance is not a “property” of an object (like a physical object or a User Interface). Instead, an affordance is defined in the relation between the user and the object: A door affords opening if you can reach the handle. For a toddler, the door does not afford opening if she cannot reach the handle.
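
That relational definition is straightforward to make concrete. Here is a minimal sketch of the door example above, assuming invented Agent and Door types and a single capability (reach); it illustrates the relation, not how a real architecture would represent affordances.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    reach_cm: float          # how high this agent can reach

@dataclass
class Door:
    handle_height_cm: float  # how high the handle is mounted

def affords_opening(agent: Agent, door: Door) -> bool:
    """The affordance lives in the agent-object relation, not in the door itself."""
    return agent.reach_cm >= door.handle_height_cm

door = Door(handle_height_cm=100.0)
print(affords_opening(Agent(reach_cm=180.0), door))  # adult:   True
print(affords_opening(Agent(reach_cm=70.0), door))   # toddler: False
```

The same door yields different affordances for different agents, which is exactly the point.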

Affordances might be a powerful descriptive and design tool here, one that bridges non-language animals and humans.

In the famous book Metaphors We Live By, George Lakoff and Mark Johnson proposed that “metaphor is integral, not peripheral to language and understanding” [https://medhum.med.nyu.edu/view/1064]. In what perhaps subsumes the metaphor theory, Douglas Hofstadter and Emmanuel Sander’s Surfaces and Essences proposes analogy as the primary generator of mental concepts and categories [Hofstadter, D.R. & Sander, E. (2013). Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. Basic Books]. Both of these seem like obvious candidates for consideration in cognitive architectures.
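
As a toy illustration of analogy as relational matching (a drastic simplification of my own, not Lakoff and Johnson’s or Hofstadter and Sander’s actual machinery), two domains can be treated as analogous where the same relations hold between corresponding entities, as in the classic atom/solar-system example:

```python
# Each domain is a list of (relation, subject, object) triples.
solar_system = [("revolves_around", "planet", "sun"),
                ("attracts", "sun", "planet")]
atom = [("revolves_around", "electron", "nucleus"),
        ("attracts", "nucleus", "electron")]

def map_analogy(source, target):
    """Pair up entities that play the same role in identically named relations."""
    mapping = {}
    for rel_s, a_s, b_s in source:
        for rel_t, a_t, b_t in target:
            if rel_s == rel_t:
                mapping[a_s] = a_t
                mapping[b_s] = b_t
    return mapping

print(map_analogy(solar_system, atom))
# -> {'planet': 'electron', 'sun': 'nucleus'}
```

A real analogy engine would have to discover such correspondences from far messier material; the sketch only shows the kind of mapping these books argue our concepts are built from.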