One of the main underlying philosophies of artificial intelligence and cognitive science is that cognition is computation.
“Cognition is computation” leads to the notion of symbols within the mind.
This is not new; people have been debating it for many decades.
There are many paths to explore how the mind works. One might start from the “bottom,” as neuroscience and connectionist AI do, and avoid symbols at first. But once you start poking around the “middle” and the “top,” symbols abound.
You might be thinking to yourself that this sounds out of date: aren’t we set on using connectionist / statistical methods now? Won’t everything involve deep neural nets?
Whoa now! Hold your horses, bucko: we are not even close to settling any of this for Strong AI or the Philosophy of Mind.
Besides the metaphor of top-down vs. bottom-up, there is also the crude dichotomy of Logical vs. Probabilistic. Some people have proposed theories that they think could work at all levels, starting in the connectionist basement and moving all the way up to the tower of human language.
This quote from Paul Smolensky (cognitive scientist and Optimality Theory co-creator) summarizes some aspects of the problem (see “Dr. Paul Smolensky – Bio”):
Precise theories of higher cognitive domains like language and reasoning rely crucially on complex symbolic rule systems like those of grammar and logic. According to traditional cognitive science and artificial intelligence, such symbolic systems are the very essence of higher intelligence. Yet intelligence resides in the brain, where computation appears to be numerical, not symbolic; parallel, not serial; quite distributed, not as highly localized as in symbolic systems. Furthermore, when observed carefully, much of human behavior is remarkably sensitive to the detailed statistical properties of experience; hard-edged rule systems seem ill-equipped to handle these subtleties.
Structures
Now, when it comes to theorizing, I’m not interested in getting stuck in the wild goose chase for the One True Primitive or Formula.
I’m interested in cognitive architectures that may include any number of different methodologies.
And those different approaches don’t necessarily require mapping to totally different architectures / systems. It could be that some of the different approaches are just different ways of looking at the same architecture or [sub]system. Even if you have a wonderful statistical model based on neurobiology, there is still likely value in symbolic and/or structural explanatory levels. Indeed, it’s questionable how much introspection and explanation we can get out of purely connectionist / statistical approaches, whether as models of human minds or as AI systems.
I think that a symbol could in fact be an arbitrary structure, for example an object in a semantic network which has certain attributes. The sorts of symbols we use in everyday living come into play when one structure is used to represent another structure. Or, perhaps instead of limiting ourselves to “represent,” I should just say “provides an interface to.”
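As a loose illustration of that idea (all names here are hypothetical, not any real library), a symbol-as-structure might be modeled as a node in a semantic network, with a second structure that stands in for it and exposes only a narrow interface:

```python
# A minimal sketch, assuming a toy semantic network: a "symbol" is just
# one structure that represents, or interfaces to, another structure.
from dataclasses import dataclass, field


@dataclass
class Node:
    """An arbitrary structure: a named node with attributes and links."""
    name: str
    attributes: dict = field(default_factory=dict)
    links: dict = field(default_factory=dict)  # relation name -> other Node


@dataclass
class Symbol:
    """A structure that represents (provides an interface to) another structure."""
    label: str
    referent: Node  # the structure this symbol stands in for

    def lookup(self, attribute: str):
        # The symbol exposes only a narrow view of its referent.
        return self.referent.attributes.get(attribute)


# One structure (the node) is represented by another (the symbol).
dog = Node("dog", attributes={"legs": 4, "sound": "bark"})
dog_symbol = Symbol(label="DOG", referent=dog)
print(dog_symbol.lookup("sound"))  # -> "bark"
```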
Interfaces
One would expect a good symbol-producing interface to be a simplifying interface. As an analogy, you use symbols on computer systems all the time. One touch of a button on a cell phone activates thousands of lines of code, which may in turn activate other programs, and so on. You don’t need to understand how any of the code works, or how any of the hardware running the code works. The symbols provide a simple way to access something complicated.
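As a toy sketch of that analogy (the function names are invented, not any real phone API), a single simple call can hide an arbitrary amount of underlying machinery:

```python
# A simplifying interface: the caller touches one "symbol" (dial) and
# never needs to know about the machinery underneath it.

def _radio_handshake() -> None:
    ...  # stands in for thousands of lines of firmware the user never sees


def _route_call(number: str) -> None:
    ...  # more hidden machinery: networks, billing, and so on


def dial(number: str) -> None:
    """The 'button': one simple symbol that triggers everything below."""
    _radio_handshake()
    _route_call(number)


dial("555-0100")  # no knowledge of code or hardware required
```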
A system of simple symbols that can be easily combined into new forms also enables wonderful things like language. And the ability to set up signs for representation (semiosis) is perhaps a partial window into how the mind works.
There’s an old but influential book called Society of Mind by Marvin Minsky (M. Minsky, Society of Mind, Simon & Schuster, 1986) which is full of theories of these structures that might exist in the information flows of the mind. However, Society of Mind attempts to describe most structures as agents. An agent isn’t merely a structure being passed around; it is also actively processing information itself.
Is There a Language of Thought?
Symbols are also important when one is considering whether there is a language of thought, and what it might be. As Minsky wrote:
Language builds things in our minds. Yet words themselves can’t be the substance of our thoughts. They have no meanings by themselves; they’re only special sorts of marks or sounds…we must discard the usual view that words denote, or represent, or designate; instead, their function is control: each word makes various agents change what various other agents do.
Or, as Douglas Hofstadter puts it (D. Hofstadter, Metamagical Themas, Basic Books, 1985):
Formal tokens such as ‘I’ or “hamburger” are in themselves empty. They do not denote. Nor can they be made to denote in the full, rich, intuitive sense of the term by having them obey some rules.
Throughout the history of AI, people have made attempts at intelligent programs and chosen some atomic object type to use for symbols…I imagine sometimes something arbitrary, like whatever was intrinsic to the programming language they were using.
But simple symbol manipulation doesn’t result in human-like understanding.
Hofstadter, at least in the 1970s and ’80s, said that symbols have to be “active” in order to be useful for real understanding. “Active symbols” are actually agencies that have the emergent property of being symbols. They are decomposable, and their constituent agents are quite stupid compared to the kind of cognitive information the symbols take part in.
Hofstadter compares these symbols to teams of ants, which pass information between teams that no single ant is aware of. And then there can be hyperteams, and hyperhyperteams…
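As a rough, purely illustrative sketch of that idea (my own toy classes, not Hofstadter’s actual model), an “agency” could be a team of trivial agents whose aggregate activity, rather than any individual agent, carries the symbol-level information:

```python
# A toy "active symbol": the symbol is a pattern over a whole team of
# simple agents; no single agent holds the symbol-level information.
import random


class Agent:
    """A very simple agent: it only passes along a tiny, noisy contribution."""

    def __init__(self, bias: float):
        self.bias = bias

    def fire(self, stimulus: float) -> float:
        # Each agent's output is trivial and meaningless in isolation.
        return stimulus * self.bias + random.uniform(-0.01, 0.01)


class Agency:
    """A team of agents whose aggregate activity behaves like a symbol."""

    def __init__(self, name: str, agents: list[Agent]):
        self.name = name
        self.agents = agents

    def activation(self, stimulus: float) -> float:
        # The "symbol" lives in the collective response, not in any one agent.
        return sum(a.fire(stimulus) for a in self.agents) / len(self.agents)


hamburger = Agency("HAMBURGER", [Agent(random.random()) for _ in range(100)])
print(hamburger.activation(1.0))
```

Teams of such agencies could themselves be grouped into larger agencies, which is one way to read the hyperteam metaphor, though nothing in this sketch captures how real understanding would emerge from it.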