THOT (That Hypothesis Over There)

Language of Thought—It’s Back!

Since the dawn of Artificial Intelligence in the 1950s there have been roughly two approaches:

  1. Symbolic
  2. Connectionist

In the 1970s the two camps, somewhat mutated, started to be called the Scruffies vs. the Neats.[1]

These categories are not entirely fair, of course—there are many more approaches—but this dichotomy persists.

Artificial neural networks—spanning from 1951 to the deep learning models of today—belong to Connectionism and the Neats, as does most of the field of ML (Machine Learning).

Symbolic AI—sometimes called “classical” even though neural nets started around the same time—gets to claim symbol-based, logic-based, rule-based, and semantic approaches, as well as modular systems akin to what is nowadays typical software architecture. Originally, logic-based AI was part of what became the Scruffies, but it got put with the Neats in the 1970s. For instance, early AI researcher John McCarthy was a big Neat formal-logic AI guy, but he also helped start the Scruffy labs and came up with a programming language they used for many years (LISP).

Another way to put it is that #2 is largely statistical whereas #1 is not. And #2 is more formal…in some ways. And #2 is very focused on “learning,” but, to make it more confusing, #1 often is as well, just with different approaches to and definitions of “learning.”

Personally, I’m more interested in a third type, or perhaps a meta-type, called Cognitive Architectures, which is not bound exclusively to either the Symbolic or the Connectionist camp. Although, as one of the descendants of GOFAI (Good Old Fashioned AI) and Strong AI, Cognitive Architectures might be more associated with Symbolic AI. Definitely in league with the Scruffies. The other descendant—the vaporous AGI (Artificial General Intelligence) that emerged two decades ago—seems more attached to Connectionism and the Neats.

As I mentioned in a previous post, the LOT (Language of Thought) hypothesis is associated with the “classical” Symbolic category.

Language of Thought

LOT is the hypothesis that there’s a “single medium in which all cognition proceeds.”[2]

However, this “language” is not a natural human language, such as the English I’m writing in—it’s a deeper internal symbolic system that underpins all our cognitive abilities, including speaking and writing.

Apparently LOT, sometimes known as mentalese, goes all the way back to 1323.[3] It made a spectacular return in 1975 via the philosopher Jerry Fodor, who was interested in “speculative psychology.”[4]

I think philosopher Daniel Dennett’s analogy[2] with programming and virtual machines is a good attempt to explain it: think of our spoken, written languages as high-level computer programming languages. Python, JavaScript, stuff like that. Then the LOT would be the low-level machine language that actually “runs” in the computer’s processor after all those layers of interpretation and compilation.

But perhaps Dennett’s virtual machines analogy is even better, since LOT might not be as low-level as machine language. Spoken/written language could be a high-level virtual machine, and we could imagine any number of intermediate virtual machines, stacked or nested. Somewhere below the levels doing linguistics, but above the lowest neural levels, is the realm of LOT.
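The stacked-virtual-machines picture can be sketched in code. This is a toy illustration of the analogy only—the layer names and the crude parsing are my inventions, not Dennett’s or anyone’s actual cognitive model:

```python
# Toy sketch of the nested-virtual-machine analogy: each layer
# translates a representation into the layer below it. All names
# and logic here are illustrative, not a real cognitive model.

def english_layer(sentence: str) -> list[str]:
    # "High-level language" layer: break a sentence into word tokens.
    return sentence.lower().rstrip(".").split()

def syntax_layer(tokens: list[str]) -> dict:
    # Intermediate layer: naively assign grammatical roles to a
    # three-word subject-verb-object sentence.
    subject, verb, obj = tokens
    return {"subject": subject, "verb": verb, "object": obj}

def lot_layer(parse: dict) -> tuple:
    # "Language of thought" layer: a compact symbolic form, still
    # above the lowest neural levels.
    return (parse["verb"].upper(), parse["subject"].upper(), parse["object"].upper())

print(lot_layer(syntax_layer(english_layer("Dogs chase cats."))))
# ('CHASE', 'DOGS', 'CATS')
```

The point of the analogy is only that the surface sentence and the “mentalese” form live at different levels of the same stack, with any number of translations in between.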

Fodor once made the point with the aid of an amusing confession: he acknowledged that when he was thinking his hardest, the only sort of linguistic items he was conscious of were snatches along the lines of “C’mon, Jerry, you can do it!”[2]

Re-Emergence

A new paper, “The Best Game in Town: The Re-Emergence of the Language of Thought Hypothesis Across the Cognitive Sciences,”[5] suggests that LOT is not dead at all and is still supported by evidence from numerous sub-fields of psychology.

We grant that the mind may harbor many formats and architectures, including iconic and associative structures as well as deep-neural-network-like architectures. However, as computational/representational approaches to the mind continue to advance, classical compositional symbolic structures—i.e., LoTs—only prove more flexible and well-supported over time.

The authors recast LOT as representational and as having six properties. The first is “discrete constituents”: the representation comprises distinct, separable parts instead of one intertwined blob. Another is “role-filler independence,” which works like Mad Libs: the roles stay fixed while different symbols can fill them. LOT symbols should also be able to represent abstract content. The rest of the properties are kind of technical and boring logical details.
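Two of those properties are easy to show concretely. This is a toy sketch of my own, not anything from the paper—the predicate and role names are made up:

```python
# Toy illustration of two LoT properties: discrete constituents
# and role-filler independence. Not from the paper; names invented.
from collections import namedtuple

# A mental "sentence" built from distinct, separable symbols
# (discrete constituents), not one entangled blob.
Thought = namedtuple("Thought", ["predicate", "agent", "patient"])

t1 = Thought("CHASES", agent="DOG", patient="CAT")

# Role-filler independence: the roles (agent, patient) exist
# independently of the symbols that fill them, so the fillers can
# be swapped without inventing a new representation -- Mad Libs.
t2 = Thought(t1.predicate, agent=t1.patient, patient=t1.agent)

print(t1)  # Thought(predicate='CHASES', agent='DOG', patient='CAT')
print(t2)  # Thought(predicate='CHASES', agent='CAT', patient='DOG')
```

The same fixed structure expresses “dog chases cat” and “cat chases dog” just by rebinding fillers to roles—something a single undifferentiated blob representation can’t do.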

One thing they argue is that state-of-the-art DNNs (deep neural nets), although very successful in certain ways, do not necessarily model or explain human cognitive abilities.

It is not a coincidence, in our view, that DNNs that succeed at image classification exhibit little to no competence in these domains. As Peters and Kriegeskorte write about feedforward DCNNs, “the representations in these models remain tethered to the input and lack any concept of an object. They represent things as stuff.”

They argue for abstract object representations using evidence from infant and animal studies that indicates one-shot category learning.

The infants’ one-shot category learning outperformed DCNNs trained on millions of labeled images. This divergence between DCNN and human performance echoes independent evidence that DCNNs fail to encode human-like transformation-invariant object representations.

Well, it’s compelling stuff. But it’s going to have to go up against the Neats, and against the non-representational subset of the Scruffies, the latest gang being the radical enactivists.[6]


  1. Wikipedia contributors. (2021, December 22). Neats and scruffies. In Wikipedia, The Free Encyclopedia. Retrieved December 24, 2022, from https://en.wikipedia.org/w/index.php?title=Neats_and_scruffies&oldid=1061595295
  2. Dennett, D. C. (1991). Consciousness Explained. Boston: Little, Brown and Co.
  3. Rescorla, Michael. (2019). “The Language of Thought Hypothesis.” The Stanford Encyclopedia of Philosophy (Summer 2019 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/sum2019/entries/language-thought/
  4. Fodor, Jerry A. (1975). The Language of Thought. New York: Thomas Y. Crowell.
  5. Quilty-Dunn, J., Porot, N., & Mandelbaum, E. (2022). The Best Game in Town: The Re-Emergence of the Language of Thought Hypothesis Across the Cognitive Sciences. Behavioral and Brain Sciences, 1–55. doi:10.1017/S0140525X22002849
  6. Wikipedia contributors. (2022, December 21). Enactivism. In Wikipedia, The Free Encyclopedia. Retrieved December 24, 2022, from https://en.wikipedia.org/w/index.php?title=Enactivism&oldid=1128705934
