The Neats Strike Back
THOT: LOT
Previously I described a widespread THOT (That Hypothesis Over There) known as LOT, the Language of Thought. It is famously associated with the philosopher Jerry Fodor in the 1970s, but it sits in a lineage going back much further and spanning several disciplines.
LOT covers a huge range of approaches to modeling the mind and/or creating AI. At its core it is about symbol manipulation, though not necessarily at the level of human spoken or written language. That, of course, leaves plenty of room for theorizing about, and implementing, what these representations are in the mind and what machinery "runs" to make use of them.
Scruffies vs. Neats
Some may have assumed LOT was long abandoned, especially since it is associated with the Scruffies. It is trendy to act as if the Scruffies have lost the war with the Neats, the connectionists whose tools are mostly built on neural networks. You may never even have heard of LOT, despite its being, historically, a basic premise of a great deal of AI research.
Recently, some researchers published an article in Behavioral and Brain Sciences arguing that LOT is still "the best game in town." That is what prompted my previous post about LOT.
Not So Fast
Apparently not everybody in the brain sciences and philosophy of mind agrees. In a recent post, "Why Neuroscience Refutes the Language of Thought," Gualtiero Piccinini challenges LOT.
It might be unfair to lump Piccinini in with the Neats, especially since he is also quite critical of artificial neural nets as a model of anything biological:
the required machinery goes way beyond the simplistic digital interpretation of neural networks that McCulloch and Pitts (1943) proposed, which was a gross simplification and idealization of real neural networks and which no self-respecting neuroscientist considers at all relevant to understanding real neural networks
But for now I will call this a Neats play anyway, since he does not seem interested in more than one level of abstraction beyond the levels of organization observable in the brain, nor in cognitive/psychological architectures or virtual machines:
Setting aside the specialized neural representations that are possibly involved in explaining human linguistic and mathematical cognition (which might approximate some aspects of discreteness at a coarse level of granularity), there is no evidence of a genuinely digital code in the brain, or of a computer-like programming language being executed within the brain, let alone digital processors including the special components that are needed for processors to work
Piccinini seems to think there has to be a digital computer running in the brain—a wetware processor of sorts that is directly analogous to silicon chips—for LOT to work.
Piccinini has another battle going on, however. As I mentioned in my post "Trends in Analog and Neural Computation," he challenges the premise that the brain's computation is digital at all. Here, as part of his argument against LOT, he says:
neural computation is not digital and that, a fortiori, the brain is not a digital computer
And that could undermine the current Neats, whose beloved neural nets run entirely on digital substrates.
Personally, I suspect we have not come close to exhausting purely architectural psychological exploration, regardless of the underlying substrates. But if Piccinini is right, maybe both the Scruffies and the Neats will fade in favor of some new factions.