More on Biomimetic AI
In my previous post, Biomimetic Memory for AI: From Philosophy to Robots, I talked about how computer “memory” is not really the same thing as “memory” in a biological system, and how representations or “content” might not be as useful or fundamental as you might think.
In pursuit of Strong AI (or anything even remotely close to that class), we might want to go back and ask what memory and remembering fundamentally are in a natural intelligent system.
What artificial architecture that mimics nature would work? And does that architecture have to be part of an agent embedded in an environment?
Physical Structural AI

After my previous post, Saty Raghavachary informed me about his very relevant paper “A Physical Structural Perspective of Intelligence” (Raghavachary, S. (2022). In: Klimov, V.V., Kelley, D.J. (eds) Biologically Inspired Cognitive Architectures 2021. BICA 2021. Studies in Computational Intelligence, vol 1032. Springer, Cham. https://doi.org/10.1007/978-3-030-96993-6_46).
He describes AGI (Artificial General Intelligence), or what you could call “Strong AI” here, as historically having three architectural approaches:
- Symbol-oriented
- Connectionist
- Embodiment
Of course, the paper (and this post) are about the third approach. Raghavachary takes a view of intelligence grounded in the behavior of physical structures, without necessarily depending on digital computation.
The key principle is that physical Structures exhibit Phenomena:
S → P
Structures can be assembled in such a way that their component phenomena interact to achieve a specific, “higher level” (possibly surprising, and non-obvious) purpose/functionality. The most instructive and delightful examples of this would have to be the assemblies conceived and sketched by cartoonist Rube Goldberg…
…There is an important realization to make: the entire contraption can be regarded as an analog computer which displays (considers inputs, computes, and responds with) intelligent behavior. In that sense, the mechanism is the computer; in other words, physical structures can be engineered to compute.
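To make that concrete in code, here is a minimal sketch of the S → P idea (the names and numbers are mine, not from the paper): each structure exhibits a phenomenon, and an assembly of structures chained so that each phenomenon feeds the next ends up “computing” a higher-level response, the way a Rube Goldberg contraption does.

```python
# A toy illustration of S -> P: structures exhibit phenomena, and an
# assembly of structures "computes" by letting those phenomena interact.
# Names (Structure, Assembly, etc.) are illustrative, not from the paper.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Structure:
    name: str
    phenomenon: Callable[[float], float]  # S -> P: what this structure does to an input

class Assembly:
    """A chain of structures, like a Rube Goldberg contraption:
    each structure's phenomenon acts on the output of the previous one."""
    def __init__(self, structures: List[Structure]):
        self.structures = structures

    def respond(self, stimulus: float) -> float:
        signal = stimulus
        for s in self.structures:
            signal = s.phenomenon(signal)
        return signal

# Example: a marble ramp, a lever, and a latch assembled so the whole
# contraption acts as a threshold detector (an "analog computation").
ramp  = Structure("ramp",  lambda x: x * 0.9)                 # marble loses a little energy
lever = Structure("lever", lambda x: x - 0.5)                 # lever needs 0.5 units to tip
latch = Structure("latch", lambda x: 1.0 if x > 0 else 0.0)   # latch releases or not

contraption = Assembly([ramp, lever, latch])
print(contraption.respond(0.4))  # 0.0: not enough push, latch stays shut
print(contraption.respond(1.0))  # 1.0: the assembly "decides" to release
```

The point is not the code itself but that the response emerges from the interaction of the components’ phenomena, not from any symbolic program running over representations.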
S → P also easily describes biological living entities, except that they were not custom-built; they evolved. Intelligence is an aspect of the life process in service of survival, and it is composed of structures and their phenomena. “Structures manifest intelligence.”
And this natural intelligence exists at every “level” of cognitive architecture or bodily organization.
Which leads to his SPSH (Structured Physical System Hypothesis): “A structured physical system has the necessary and sufficient means for specific intelligent response.”
He provides some SPSH-compatible biomimetic AGI design principles; here are a few:
- Embrainment
- Field computation (“a model of computation that processes information represented as spatially continuous arrangements of continuous data.” MacLennan, B. J. (1999). Field computation in natural and artificial intelligence. Information Sciences, 119(1-2), 73-89; see the sketch after this list)
- Analog hardware in addition to digital
- Genetic algorithms for architecture search
- Design for the “Umwelt”
- Homeostasis (I’ve mentioned this in Biomimetic Emotional Learning Agents and All Minds Are Real-Time Control Systems)
- Analyze the affordances (“affordances” being something I mention a lot in this blog, for instance: Mechanisms of Meaning, Walk Softly and Carry an Appropriately Sized Stick)
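Field computation is probably the least familiar item on that list, so here is a minimal sketch of its flavor (my illustration, not code from MacLennan’s paper): information is represented as a spatially continuous field, approximated here on a grid, and “computing” means applying an operator over the whole field at once rather than over discrete symbols.

```python
# A toy sketch of field computation: information lives in a spatially
# continuous field (here approximated by a 1-D grid), and "computing"
# means applying an operator over the whole field at once.
# This is my illustration, not code from MacLennan's paper.

import numpy as np

def diffuse(field: np.ndarray, rate: float = 0.25, steps: int = 50) -> np.ndarray:
    """Repeatedly apply a local averaging (diffusion) operator to the whole field."""
    f = field.copy()
    for _ in range(steps):
        left  = np.roll(f, 1)
        right = np.roll(f, -1)
        f = f + rate * (left + right - 2 * f)   # discrete Laplacian update
    return f

# A "stimulus" written into the field as two sharp bumps...
field = np.zeros(100)
field[20] = 1.0
field[70] = 0.5

# ...and the field's evolution smooths them into a continuous landscape.
result = diffuse(field)
print(result.argmax(), round(result.max(), 3))  # location and height of the strongest peak
```

A potential-field navigator, for instance, computes a path by following the gradient of a field like this rather than by searching a symbolic graph.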
Distributed Maps

In the previous post I mentioned Behavioral AI, and how in the 1980s and 1990s it may have been on the brink of the right direction. But the research largely faded out before it got very far. Perhaps those researchers barked up the wrong tree, but they were in the right forest.
Barry Werger told me I should have mentioned Maja Mataric’s 1990 paper “Learning a distributed map representation based on navigation behaviors” (Matarić, M.J., & Brooks, R.A. (1990). Proceedings, USA-Japan Symposium on Flexible Automation). And I agree, but I didn’t want to bloat the previous post, so I saved it for this one.
Toto the mobile robot was programmed with the subsumption architecture. It had no central representation of the world, just temporary aspects distributed among its many parallel behaviors interacting with the world. But the robot could build a “map” through a kind of passive landmark detection as it wandered: “As the robot explores its environment, individual behaviors come to represent particular parts of the world, or landmarks.”
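Here is a minimal sketch of that flavor of distributed mapping (my reconstruction of the idea, not Mataric’s code): each detected landmark gets its own small behavior that recognizes “its” part of the world, and the map is nothing more than the links formed between those behaviors as the robot wanders.

```python
# A toy reconstruction of Toto-style distributed mapping (my sketch,
# not Mataric's code): each landmark is represented by its own small
# behavior, and the "map" is just the links formed between behaviors
# as the robot encounters landmarks in sequence.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LandmarkBehavior:
    kind: str          # e.g. "left-wall", "corridor"
    heading: float     # rough compass heading when detected
    neighbors: List["LandmarkBehavior"] = field(default_factory=list)

    def matches(self, kind: str, heading: float, tol: float = 30.0) -> bool:
        """This behavior 'fires' when the current percept looks like its landmark."""
        return kind == self.kind and abs(heading - self.heading) <= tol

class DistributedMap:
    """No central map: just a growing set of landmark behaviors and their links."""
    def __init__(self):
        self.behaviors: List[LandmarkBehavior] = []
        self.current: Optional[LandmarkBehavior] = None

    def observe(self, kind: str, heading: float) -> LandmarkBehavior:
        # If an existing behavior recognizes the landmark, the robot is localized.
        for b in self.behaviors:
            if b.matches(kind, heading):
                self._link(self.current, b)
                self.current = b
                return b
        # Otherwise a new behavior comes to "represent" this part of the world.
        b = LandmarkBehavior(kind, heading)
        self.behaviors.append(b)
        self._link(self.current, b)
        self.current = b
        return b

    def _link(self, a: Optional[LandmarkBehavior], b: LandmarkBehavior) -> None:
        if a is not None and b not in a.neighbors:
            a.neighbors.append(b)
            b.neighbors.append(a)

# Wandering down a corridor and revisiting it links the same behaviors together.
toto = DistributedMap()
for percept in [("left-wall", 0), ("corridor", 0), ("right-wall", 180), ("corridor", 0)]:
    toto.observe(*percept)
print(len(toto.behaviors))  # 3: the revisited corridor was recognized, not duplicated
```

In the actual Toto work, spreading activation through the network of landmark behaviors was also used to find paths back to previously visited places.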
What’s very cool about the Toto experiment from Mataric (and Rod Brooks) is that it used a technical implementation to add insight to arguments that continue to this day in philosophy of mind and cognitive science:
- Are representations just a fiction we use to describe what it seems like minds are doing?
- Is there a basic scaffolding of the mind that’s totally non-representational?
- Is it a paradox for a non-representational architecture to effectively do what we think representations would do?
- Does higher human (or any animal) thought require representations, or something similar that is emergent from a basic scaffolding?