From Philosophy to Robots
What even is “memory?”
It’s easy to describe for computers. But do we really know what it is in biological systems?
And how does it relate to information?
Information itself is a foundational concept for cognitive science theories. In a very basic sense, information is “dumb.” And it’s all over the place, if somebody is there to interpret it. For instance, the rings of a tree carry information about the age of the tree.
But the very definition of information can also cause issues, especially when the term is used to describe the brain “encoding” information from the senses without any regard for types of information and levels of abstraction.
And how does information in our nervous system produce or have meaning?
Daniel D. Hutto (Professor of Philosophical Psychology) has pointed out the entrenched metaphor of memories as items archived in storehouses in our minds, and the dangers of not realizing this is a metaphor (Hutto, D. D. (2014), "Remembering without Stored Contents: A Philosophical Reflection on Memory," Memory in the Twenty-First Century. https://www.academia.edu/6799100/Remembering_without_Stored_Contents_A_Philosophical_Reflection_on_Memory).
Hutto also points out two important and different notions of information:
- Covariant
- Contentful
Covariance and Content
Information as covariance is one of the most basic ways to define information itself philosophically.
Naturally occurring information, and presumably artificial information, is at least covariance. That is, information is a relationship between two states which always (or at least fairly reliably) change together, like the rings of a tree and the tree's age.
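As a toy sketch of the covariance notion (everything here, including the function names and the missing-ring rate, is invented for illustration): a simulated tree's ring count reliably tracks its age, but that relationship only yields information when something reads it that way.

```python
import random

def grow_tree(age_years):
    """Ring count covaries with age: roughly one ring per year, with an
    occasional missing ring, so the relation is reliable but not perfect."""
    return sum(1 for _ in range(age_years) if random.random() > 0.02)

def read_age_from_rings(ring_count):
    """An interpreter exploiting the covariance; without something like
    this, the rings are just wood."""
    return ring_count

rings = grow_tree(80)
print("rings:", rings, "-> estimated age:", read_age_from_rings(rings))
```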
What is content as philosophers use the term? Here we will pretend it means representations. It’s common and easy to describe thoughts as built out of representations of things, be they real or imaginary, since those things are not actually inside a person’s brain, and to the best of our knowledge there is not some ethereal linkage between a real object and a thought of that object.
Perhaps content is built off of covariance. And what we often think of as “remembering” and “recall” are abstractions resting on several lower layers.
Content, at least in the form of representations, is problematic though. Most people doing AI begin with representations, and none of those projects has led to a biologically similar intelligence. It's been argued from the philosophy of mind side that basic minds do not actually have content! (Hutto, Daniel D. and Erik Myin (2012), Radicalizing Enactivism: Basic Minds without Content, MIT Press.)
Behavioral AI
Which leads us to a form of contentless AI that had some moments of popularity back in the 1980s and 1990s, its unofficial leader being Rodney Brooks.
- Decision making directly based on inputs
- The idea that intelligent behavior is seen as innately linked to the environment an agent occupies: intelligent behavior is not disembodied, but is a product of the interaction the agent maintains with its environment
- The idea that intelligent behavior emerges from the interaction of various simpler behaviors (Alexander Kleiner and Bernhard Nebel, "Introduction to Multi-Agent Programming," http://gki.informatik.uni-freiburg.de/teaching/ws0809/map/mas_lect3.pdf)
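A minimal sketch of that style of control (the sensor and motor names are invented, and this is only in the spirit of Brooks's subsumption architecture, not his actual code): each behavior maps raw sensor readings directly to a motor command, a fixed priority ordering arbitrates, and the robot's overall behavior emerges from their interplay with the environment rather than from a stored world model.

```python
# A toy behavior-based controller: each behavior maps raw sensor readings
# directly to a motor command, and a fixed priority ordering arbitrates.
# No world model, no stored representations.

def avoid(sensors):
    """Highest priority: reflexively turn away from a nearby obstacle."""
    if sensors["front_distance"] < 0.3:
        return ("turn", -1.0)
    return None  # defer to lower layers

def seek_light(sensors):
    """Middle priority: steer toward the brighter side."""
    diff = sensors["light_right"] - sensors["light_left"]
    if abs(diff) > 0.1:
        return ("turn", 0.5 if diff > 0 else -0.5)
    return None

def wander(sensors):
    """Lowest priority: default forward motion."""
    return ("forward", 0.2)

BEHAVIORS = [avoid, seek_light, wander]  # ordered by priority

def control_step(sensors):
    """Run one sense-act cycle: the first behavior with an opinion wins."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

print(control_step({"front_distance": 1.0,
                    "light_left": 0.2, "light_right": 0.6}))
```

Note that nothing in the sketch looks like a memory store; whatever coherent long-run behavior appears comes from the layers interacting with each other and with the environment.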
Behavior based AI [Brooks, 1990, Maes, 1990] has questioned the need for modeling intelligent agency using generalized cognitive modules for perception and behavior generation. Behavior based AI has demonstrated successful interactions in unpredictable environments in the mobile robot domain [Brooks, 1985, Brooks, 1990]. This has created a gulf between "traditional" approaches to modeling intelligent agency and behavior based approaches. (Lammens, J. M., Caicedo, G., & Shapiro, S. C. (1993), "Behavior Based AI, Cognitive Processes, and Emergent Behaviors in Autonomous Agents.")

I think if Brooks (Brooks, R. A. (1991), "Intelligence without representation," Artificial Intelligence, 47(1), 139-159) and company (Connell, Flynn, Mataric, Angle, and others) had continued their blue-sky research (as opposed to applied, etc. research) in reflexive/behavioral robot AI, they might have achieved more impressive layered mental architectures with learning capabilities, either by design and/or by emergence.
And via those capabilities, the robots could have developed emergent internal behavior that could solidly be referred to as "memory": not memory as in a computer, but memory as in an animal.
Memory in a system grown from behavioral AI would not be like a module dropped in—it would have to be in the nature of the system. And if that unspecific nature makes it difficult to use the term "memory," then so be it.
Maybe the word “memory” should be abandoned for this kind of project since the word recalls “storage” and a host of computational baggage which is very achievable in computers (indeed, at many levels of abstraction) but misleading for making bio-inspired embodied situated creatures. But we also use it for humans and other animals…so here we are.
The big conundrum: We want the robot to have as close to zero state as possible at the level of abstraction of the running computer program(s), yet have state somehow emerge from the system acting in an environment.
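One way to picture that conundrum (a toy grid-world sketch, entirely made up for illustration): the controller below is a pure function of the current percept and keeps no variables between calls, yet the agent's behavior still depends on its history, because that history is written into the environment as marks the agent leaves behind.

```python
# Toy illustration: a controller with no persistent program state. The
# agent leaves marks in the world as it moves; because each decision reads
# those marks, past activity shapes future behavior even though the policy
# itself is a pure function of the current percept.

def stateless_policy(percept):
    """percept maps each neighboring cell to whether it is already marked."""
    unmarked = [cell for cell, marked in percept.items() if not marked]
    return unmarked[0] if unmarked else next(iter(percept))

def run(steps=8):
    marks = set()              # held by the environment, not the agent
    pos = (0, 0)
    for _ in range(steps):
        percept = {(pos[0] + dx, pos[1] + dy):
                   (pos[0] + dx, pos[1] + dy) in marks
                   for dx, dy in [(1, 0), (0, 1), (-1, 0), (0, -1)]}
        pos = stateless_policy(percept)
        marks.add(pos)         # the agent's trace, written into the world
        print(pos)

run()
```

The "memory" in this sketch is stigmergic: it lives in the marks, i.e. in the world, not in the program's state.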
Stories?
Hutto claims that contentful information is not in a mind—it requires connections to external things (Hutto, 2014, cited above):
Yet arguably, the contents in question are not recovered from the archives of individual minds; rather, they are actively constructed as a means of relating and making claims about past happenings. Again, arguably, the ability does not stem purely from our being creatures who have biologically inherited machinery for perceiving and storing informational contents; rather, this is a special competence that comes only through mastery of specific kinds of discursive practices.
As that quote introduces, Hutto furthermore suggests that human narrative abilities, e.g. telling stories, may be part of our developmental program to achieve contentful information. If that's true, then it means my ability to pull up memories is somehow derived from childhood learning of communicating historical and fictional narratives to other humans.
Regardless of whether narrative competence is required, we can certainly explore many ways in which a computational architecture can expand from pre-programmed reflexes to conditioned responses to full-blown human level semantic memories.
It could mean there are different kinds of mechanisms scaffolded on top of each other, as well as scaffolded on externally available interactions, and/or scaffolded on new abstractions such as semantic or pre-semantic symbols composed of essentially basic reflexes and conditioning.
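As a rough sketch of one such scaffolding step (the update is a standard Rescorla-Wagner-style delta rule, chosen here purely as an illustration; none of the names or numbers come from Hutto or Brooks): a hard-wired reflex to one stimulus, with a conditioned response to a previously neutral cue gradually built on top of it through repeated pairing.

```python
# Toy sketch: a conditioned response scaffolded on a built-in reflex.
# Repeatedly pairing a neutral cue with the reflex-triggering stimulus
# gradually lets the cue alone drive the response.

LEARNING_RATE = 0.3

def reflex(shock):
    """Innate, unlearned response to the unconditioned stimulus."""
    return 1.0 if shock else 0.0

def conditioned(association, cue):
    """Learned response strength carried by the cue alone."""
    return association if cue else 0.0

association = 0.0
for trial in range(10):
    cue, shock = True, True                       # paired presentations
    # the association moves toward the outcome actually experienced
    association += LEARNING_RATE * (reflex(shock) - association)

print("response to cue alone, trained:  ", round(conditioned(association, cue=True), 3))
print("response to cue alone, untrained:", conditioned(0.0, cue=True))
```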
What We Really Want from a Robot with Memory
A requirements approach could break bio-inspired memory down into the behaviors one wants a robot to be capable of, regardless of how they are achieved.
A first stab at a couple of parts of such a decomposition might be:
1. It must be able to learn: in other words, change its internal informational state based on current and previous contexts.
2. Certain kinds of observed environmental patterns should be reproducible, to some degree of accuracy, by its effectors.
In number 1, the internal state does not necessarily have to be like nonvolatile computer memory or long term in any sense. It could be emergent from a dynamic system.
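A toy sketch of what "emergent from a dynamic system" could look like (the leak constant and the stimulus sequence are arbitrary): the only "memory" is the ongoing activity of a leaky unit; nothing is ever written to a store, yet a recent input lingers in the current value for a while before fading.

```python
# Toy sketch: "state" as the trajectory of a dynamical system rather than
# a value written to storage. A leaky unit's activity decays over time, so
# a recent stimulus lingers in it for a while and then fades.

LEAK = 0.8   # fraction of the previous activity carried over each step

def step(activity, stimulus):
    return LEAK * activity + (1.0 - LEAK) * stimulus

activity = 0.0
for stimulus in [1.0, 1.0, 0.0, 0.0, 0.0, 0.0]:   # brief input, then silence
    activity = step(activity, stimulus)
    print(round(activity, 3))   # the stimulus "echoes", then decays
```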
In number 2, I mean remembering and recalling "pieces of information." For example, the seemingly simple act of Human A remembering a name involves many stages of informational coding and translation. Later, the recollection of the name involves a host of computations in which the contexts trigger patterns at various levels of abstraction, resulting in conscious content as well as motor actions such as marking (or speaking) a series of symbols (e.g. letters of the alphabet) that triggers in Human B a fuzzy cloud of patterns that is close enough to the "same" name. Human B would say that Human A "remembered a name." Or if Human A produced the wrong marks, or no marks at all, we might say that they "forgot the name" or perhaps "never learned it in the first place."
In terms of accuracy, we also have to consider how bad biological systems are at it, and why. For instance, with recalling stories, recall can be more of a reconstruction in the telling than some kind of verbatim retelling.
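A sketch of recall as reconstruction rather than retrieval (a tiny Hopfield-style associative network; the two bit patterns standing in for "names" are arbitrary): a degraded cue is pulled toward a learned attractor, which usually reconstructs something close enough to the original pattern; with more stored patterns or a worse cue, it can instead settle on a blend or the wrong pattern, which is one computational face of reconstruction error.

```python
import numpy as np

# Toy Hopfield-style recall: nothing is retrieved verbatim from an archive;
# a noisy cue is pulled toward a learned attractor, reconstructing a pattern
# that is (usually) close enough to the original.

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])  # two "names" as +/-1 codes

# Hebbian weight matrix with zero diagonal
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    """Iteratively settle the network state from a (possibly degraded) cue."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

noisy_cue = patterns[0].copy()
noisy_cue[:2] *= -1                       # corrupt part of the cue
print("cue:          ", noisy_cue)
print("reconstructed:", recall(noisy_cue))
print("original:     ", patterns[0])
```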
These requirements and notes are very incomplete and possibly completely wrong, but hopefully they may be a starting point for thinking about how to make AI with biomimetic memory, not just computer "memory."