The Flattening of AI

Much in the same way that our eyes refresh our view through continual movement,
the means to rise up and upend our thinking are found in this tumbling of relations.
Unflattening,
we remind ourselves of what it is to open our eyes to the world for the first time.[1]

There has been a flattening of AI. But first, let’s rewind a few seconds…

Artificial Intelligence was originally about Strong AI, starting back in the 1950s and even before that under various other names.

Strong AI essentially means human-level intelligence.

And this was a massive mountain to tackle.

Achieving human-level artificial intelligence has turned out to be an extremely difficult task—much like climbing an unconquered peak.

At first the going was rather easy. Several preliminary pitches were scaled without too much difficulty.

It wasn’t long, however, before commentators pointed to difficulties ahead.

So, the climbers re-grouped. They improved much of their climbing gear and developed some new gear and techniques—Lisp machines, Bayes networks, sophisticated search strategies, Monte Carlo methods, Walksat, default logics, POMDPs, hidden Markov models, reinforcement learning, genetic programming, and support-vector machines, among others. Indeed, many of these methods were so powerful and useful that even more climbers abandoned the climb and detoured into green valleys to use their expertise on problems in biology, business, and defense—problems that didn’t have very much to do with summitting.[2]

The summit has never been reached. But is this just some dream of old fogeys?

Some might call the right methods GOFAI—Good Old-Fashioned Artificial Intelligence. But GOFAI usually refers to symbolic-only approaches.

Strong AI involves a wide variety of thinking. Lots of different approaches. But it’s been largely squeezed out of the field of AI.

The field of Artificial Intelligence has flattened.

Much Ado About Something

Thousands (millions?) of people ranging from developers to scientists are doing lots of AI work. At some point Google alone had thousands of deep learning projects with an army of dutiful engineers cranking away on them. But it’s flat. It’s shallow. And it’s narrow.

Argument: But deep learning has been one of the most popular sub-fields for the past several years, and it keeps getting better and better! How could that be “flat”—it’s got “deep” right in the name!?

Well, no, sorry: it’s only deep relative to the structural history of neural nets, and it typically relies on big data. It doesn’t have much to do with being deep in any cognitive or semantic sense, unless you do some handwaving and cartwheels to argue that enough data and statistics somehow equal biologically-similar semantic understanding in a machine.
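
To make that structural point concrete, here’s a minimal sketch (assuming PyTorch; the layer widths are arbitrary values I picked for illustration) of what “deep” actually names: more stacked layers of the same kind of numeric transformation.

```python
# A minimal sketch of why "deep" is a structural claim, not a semantic one.
# Assumes PyTorch; layer widths are arbitrary illustration values.
import torch
import torch.nn as nn

# A "shallow" net, in the spirit of early MLPs: one hidden layer.
shallow = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# A "deep" net: the same kind of layers, just more of them stacked up.
deep = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64),  nn.ReLU(),
    nn.Linear(64, 10),
)

x = torch.randn(1, 784)                 # one fake input vector
print(shallow(x).shape, deep(x).shape)  # both: torch.Size([1, 10])

# "Depth" here is just more function composition over the same numeric
# substrate. Nothing in the extra layers adds cognitive or semantic depth.
```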

And ML (Machine Learning), the subfield in which deep learning resides, is all narrow. In fact, ML methods don’t have much to do with most kinds of biological learning that are interesting to Strong AI. ML is great as an applied tool in all kinds of places. But it’s narrow. And flat.
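
To put a point on “narrow,” here’s a minimal sketch (assuming scikit-learn; the digits task is an arbitrary choice of mine) of what a typical ML model amounts to: one fixed mapping from one kind of input to one kind of label.

```python
# A minimal sketch of "narrow": a model learns exactly one mapping and
# nothing else. Assumes scikit-learn; the task choice is arbitrary.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 digit images, labels 0-9
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # roughly 0.96: great at this one task

# But that's all it is: a fixed mapping from 64 pixel values to 10 labels.
# Hand it a different image size, a word, or a brand-new category and it
# can't even accept the input, let alone adapt the way biological
# learners do.
```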

Some call this “Weak AI” to make it an antithesis to Strong AI, but that seems a bit insulting…unless somebody is trying to tell you their Generative Pre-trained Transformer 3 (GPT-3)[3] deep learning architecture will spontaneously gain sentience—in that case, call it Weak AI to their face.

That’s not to say I’m choosing sides in the decades-old battle of symbolic logic vs. connectionism. (Deep learning would be on the connectionist side.) I’m just saying that no flat/narrow algorithm or structure is, in itself, going to lead to Strong AI. And that holds even more strongly if you buy into the approaches that claim human-like AI requires a dynamic system with an environment and a physical body.

And don’t get me wrong, deep learning and ML are very useful. The tech is pretty exciting, especially now that increasingly powerful GPUs have been put to use for it. But for Strong AI, I think they will at best be small components, if anything at all. However, maybe some kinds of statistical approaches will prove illuminating for creating minds. We’ll see.

Imagine if, before the Wright Brothers got planes to fly, somebody had said, “Well, my gasoline-powered internal combustion engine packs quite a punch—all we need to make a flying machine is to keep stacking up a bunch of combustion engines.”

Not Dead Yet

So…AI flattened and Strong AI kind of disappeared. But threads still exist. People are interested in it, maybe attempting to work on it here and there. I don’t have any data on how many, though. Some of the goals live on in the small communities of AGI, cognitive architectures, and bio-inspired systems.

Meanwhile, in the discipline of philosophy of mind, philosophers are coming up with lots of good stuff that’s relevant to cognitive science—and to Strong AI…if anybody in AI would listen.

The 4E cognition approaches (in philosophy and cognitive science) are particularly interesting, where 4E means Embodied, Ecological (or Embedded), Extended, and Enactive:

Revolution is, yet again, in the air. This time it has come in the wake of avant-garde Enactive or Embodied approaches to cognition that bid us to reform our thinking about the basic nature of mind.

The most radical versions of these approaches are marked by their uncompromising and thoroughgoing rejection of intellectualism about the basic nature of mind, abandoning the idea that all mentality involves or implies content.[4]

4E should also help with the Symbol Grounding Problem[5] in AI.

AI and robotics used to overlap with early forms of 4E philosophy, namely in the 1980s and ’90s, with non-representational, behavior-based mobile robots, for example. But that overlap seems to have mostly faded away, and the disciplines are now much more siloed. There are some researchers out there trying to do 4E AI, but not many.

But the Old Ways Didn’t Work

Physics has made impressive progress in 350 years, but it is not yet “done”.

Only 60 years after starting to apply computational modeling methods to the Problem of the Mind, it is not surprising that we are far from “done”.[6]

Argument 1: But as a society we had to flatten AI! It was the only way out of the last AI Winter![7]

Argument 2: We had to flatten AI because Strong AI theories were wrong!

Maybe we did, in order to get culture and technology involved and the money flowing.

But I do not have any reason to believe that Strong AI is dead, even if some theories were wrong, and some theories were never fully implemented, so we don’t even know. Failure does not necessarily mean a theory was “wrong.”[8] On the other hand, “Just because a theory is old doesn’t mean it’s correct.”[9]

Some have proposed that we need to go non-computational to achieve artificial cognition.[9] In my opinion, computational approaches are not dead yet—although I’m sympathetic to using them in combination with dynamic systems and 4E approaches.

Notes

[1] Sousanis, N. Unflattening. Harvard University Press, 2015.
[2] Nilsson, N. J. Routes to the Summit. AI@50 Dartmouth Conference, 2006.
[3] Sumrak, J. What Is GPT-3: How It Works and Why You Should Care. Twilio Blog, 2020. https://www.twilio.com/blog/what-is-gpt-3
[4] Hutto, D. D. & Myin, E. Radicalizing Enactivism: Basic Minds without Content. MIT Press, 2012.
[5] Kenyon, S. AI Don’t Know Jack? The Symbol Grounding Problem. MetaDevo, 2021. https://metadevo.com/ai-dont-know-jack/
[6] Kuipers, B. Progress in AI. 2019. https://web.eecs.umich.edu/~kuipers/opinions/AI-progress.html
[7] Milton, L. History of AI Winters. Actuaries Digital, 2018. https://www.actuaries.digital/2018/09/05/history-of-ai-winters/
[8] Kenyon, S. Failure Does Not Necessarily Mean a Theory is Wrong. MetaDevo, 2021. https://metadevo.com/failure-does-not-necessarily-mean-a-theory-is-wrong/
[9] Brooks, R. Cognition Without Computation. IEEE Spectrum, 2021. https://spectrum.ieee.org/computational-cognitive-science