A Way of Looking at Cognition
Whenever you (or a team) come up with a solution to some problem, you have probably considered a few other solutions along the way. An observer of the process could imagine many more potential alternate solutions, maybe an infinite number.
We could imagine a 2D or 3D (or really any dimensionality) mathematical space in which to place all these different solutions.

Under the premise that animal-like (and eventually human-like) intelligence can be achieved using cognitive architectures, solution spaces for those architectures might be useful.
Here I’ll introduce one such space and then get into “interfacism.”
Content/Internalism Space
I propose that these two cognitive spectra are of interest and possibly related, and that together they form a two-axis space (see the sketch after this list):
- Content vs. No content
- Internal cognition vs. External cognition
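To make the space concrete, here is a minimal sketch in Python. The example architectures and their coordinates are my own illustrative guesses, not anything measured or claimed in the literature.

```python
# A toy model of the two-axis space. Coordinates are illustrative
# assumptions, ranging from -1.0 to +1.0 on each spectrum.
from dataclasses import dataclass

@dataclass
class ArchitecturePoint:
    name: str
    content: float  # -1.0 = contentless ... +1.0 = heavily contentful
    locus: float    # -1.0 = internal (brain-only) ... +1.0 = external (brain-body-environment)

# Hypothetical placements, purely for orientation in the space.
points = [
    ArchitecturePoint("classical symbolic AI", content=0.9, locus=-0.9),
    ArchitecturePoint("radical enactivism", content=-0.8, locus=0.8),
    ArchitecturePoint("interfacism (this post)", content=0.2, locus=0.3),
]

for p in points:
    print(f"{p.name}: content={p.content:+.1f}, locus={p.locus:+.1f}")
```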
What is “Content”?
I don’t have a very good definition of this, and so far I haven’t seen one from anybody else despite its common usage by philosophers. One aspect (or perhaps container) of content is representation. That is a bit easier to comprehend—an informational structure can represent something in the real world (or represent some other informational structure).
It may seem obvious that humans have representations in their minds, but that is debatable. Some, such as Hutto and Myin, suggest that human minds are primarily without content, and only a few faculties require content (D. D. Hutto and E. Myin, Radicalizing Enactivism: Basic Minds Without Content. Cambridge, Mass.: MIT Press, 2013):
“Some cognitive activity—plausibly, that associated with and dependent upon the mastery of language—surely involves content. Still, if our analyses are right, a surprising amount of mental life (including some canonical forms of it, such as human visual experience) may well be inherently contentless.”
And the primary type of content that Hutto and Myin try to expunge is representational.
It’s worth mentioning that representation can be divorced from the Computational Theory of Mind. Nothing here goes against the mind as computation. If you could pause a brain, you could point to various informational states, which in turn compose structures, and say that those structures are “representations.” But they don’t necessarily mean anything—they don’t have to be semantic.
Another aspect of content is aboutness. “Aboutness” is an easier word to use than the philosophical term intentionality, since “intentionality” has a different everyday meaning that can cause confusion (D. C. Dennett, Intuition Pumps and Other Tools for Thinking. W.W. Norton & Company, 2013).
We think about stuff. We talk about stuff. External signs are about stuff. And we all seem to have a lot of overlapping agreements on what stuff means, otherwise we wouldn’t be able to communicate at all and there wouldn’t be any sense of logic in the world.
So does this mean we all have similar representations? Does a stop sign represent something? Is that representation stored in each of our brains, such that we all know what a stop sign means? What things would we not understand without in-brain representations? And how are those representations grounded?
Consider some sensory stimulus that sets off a chain reaction resulting in a particular behavior that most humans share. Is that an internal representation at work, or are dynamic interfaces something different?
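As a way to sharpen the question, here is a minimal sketch of such a stimulus-to-behavior chain. The stages and thresholds are my own assumptions; the point is that every step merely transforms a signal, and it is debatable whether anything here counts as a stored representation.

```python
# A toy reflex chain: stimulus -> transduction -> relay -> behavior.
# No stage stores a structure that is "about" the stimulus.

def receptor(stimulus: float) -> float:
    return max(0.0, stimulus - 0.5)  # simple threshold transduction

def relay(signal: float) -> float:
    return signal * 2.0  # amplification along the chain

def motor(signal: float) -> str:
    return "withdraw" if signal > 0.4 else "rest"

def reflex(stimulus: float) -> str:
    return motor(relay(receptor(stimulus)))

print(reflex(0.9))  # -> withdraw
print(reflex(0.2))  # -> rest
```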
Internal vs. External
This axis is about the prevailing cognitive science assumption that anything of cognitive interest is neural. Indeed, most would go even further than neural and limit themselves just to the brain.
However, the brain is just one part of your nervous system. Although human brain evolution and development seem to be the cause of our supposed mental advantages over other animals, we should be careful not to discard all the supporting and/or interacting structures.
We might want to consider our insect cousins: “Indeed, a headless insect may survive for days or weeks (until it dies of starvation or dehydration) as long as the neck is sealed to prevent loss of blood!” (NC State University, “Nervous System | ENT 425 – General Entomology,” accessed Nov. 22, 2022, https://genent.cals.ncsu.edu/bug-bytes/nervous-system/)
I’m not saying here that the brain focus is wrong; I’m merely saying that one can have a spectrum.
For instance, a particular ALife experiment could be analyzed from any point of view along that axis. Or you could design an ALife situation at any point on it, e.g. by focusing just on the internal controller that is analogous to a brain (internalist) vs. focusing on the entire system of brain-body-environment (externalist), as in the sketch below.
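Here is a minimal sketch, under my own assumptions, of the two framings. An internalist analysis would inspect only controller(); an externalist analysis would treat controller, body, and environment as one coupled system, since the light-seeking behavior only exists in the whole loop.

```python
# A toy agent on a 1D line with a light source at position 10.0.

def controller(sensor: float) -> float:
    """The 'brain': maps a sensor reading to a motor command."""
    return 1.0 if sensor < 0.5 else -1.0  # move forward until the light feels close

def step(position: float, light_source: float = 10.0) -> float:
    """Body + environment: the motor command changes the agent's
    position, which changes what it senses on the next step."""
    sensor = 1.0 / (1.0 + abs(light_source - position))  # proximity signal
    return position + 0.5 * controller(sensor)

pos = 0.0
for _ in range(30):
    pos = step(pos)
print(f"final position: {pos:.1f}")  # hovers near the light source
```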
Interfacism
Since there has to be an “ism” for everything, there is of course representationalism. Another philosophical stance that is sometimes pitted against representationalism is direct realism.
Direct realism seems to be kind of sloppy. It could simply mean that at some levels of abstraction in the mind, real-world objects are experienced as whole objects, without any awareness of the various mental middlemen involved in constructing the representation of that object. E.g., we don’t see a chair by consciously and painstakingly sorting through various raw sensory data chunks—we have an evolved and developed system for becoming aware of a chair as an object “directly.”
Or, perhaps, in an enactivist or dynamical-systems sense, one could say that regardless of information processing or representations, real-world objects are the primary cause of the information patterns that propagate through the system and lead to experience of the object.
My middle ground between direct and indirect realism would, perhaps, be called “interfacism,” which is a form of representationalism that is enactivism-compatible. Perhaps most enactivists already think that way, although I don’t recall seeing any enactivist descriptions of mental representation in terms of interfaces.
What I definitely do not concede is any form of cognitive architecture which requires veridical, aka truthful, accounts anywhere in the mind.
What I do propose is that any concept of an organism can be seen in terms of interactions.
The organism itself is a bunch of cellular interactions, and that blob interacts with other blobs and elements of the environment, some of which may be tools or cognitively extensive information processors. Whenever you try to look at a particular interaction, there is an interface. Zooming into that interface reveals yet more interfaces, and so on. To say anything is direct, in that sense, is false.
For example, an interfacism description of a human becoming aware of a glass of beer would acknowledge that the human as an animate object and the beer glass as an inanimate object are arbitrary abstractions or slices of reality. At that level in that slice, we can say there is an interface between the human and the glass of beer, presumably involving the mind attributed to the human.
But, if we zoom into the interface, there will be more interfaces.
And so on…
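To make the recursion concrete, here is a toy sketch of the idea; the data structure and the particular slices of the human/beer-glass example are my own assumptions, not a formalism from the enactivist literature.

```python
# "Interfaces all the way down": inspecting the interface between two
# nodes never yields a direct connection, only finer-grained interfaces.
from dataclasses import dataclass, field

@dataclass
class Interface:
    a: str  # one side of the interaction
    b: str  # the other side
    sub: list["Interface"] = field(default_factory=list)  # what zooming in reveals

def zoom(iface: Interface, depth: int = 0) -> None:
    print("  " * depth + f"{iface.a} <-> {iface.b}")
    for inner in iface.sub:
        zoom(inner, depth + 1)

# Hypothetical slices of the human/beer-glass example.
perception = Interface(
    "human", "glass of beer",
    sub=[
        Interface("retina", "reflected light",
                  sub=[Interface("photoreceptor", "photon")]),
        Interface("hand", "glass surface"),
    ],
)
zoom(perception)
```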
And semantics will probably require links to other things; for instance, we don’t just see what is in front of us—we can be primed or biased, or hallucinate, or dream, etc.
How sensory data comes to mean anything at all almost certainly involves evolutionary history, ontogeny (life history), and current brain states at least as much as any immediate perceptual trigger. And since our perception is just a contraption of evolution, we aren’t ever really seeing true reality—“true reality” is a nonsensical concept.
I think interfacism is possibly a good alternate way to look at cognition, be it wide or narrow: at any given cognitive granularity, there is no direct connection between two “nodes” or objects. There is just an interface, and anything “direct” is at a level below, recursively. It’s also compatible with non-truthful representations and/or perception.
Some might say that representations have to be truthful, or that representations exist (for instance, in animal behaviors) because there is some truthful mapping between the real world and the behavior. With an interface point of view, we can throw truth out the window. Mappings can be arbitrary. They may be consistent and/or accurate. But they don’t have to be truthful in any sense beyond that.
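As a small illustration (my own example, not from the literature), a mapping can be perfectly consistent without being truthful: internal tokens can be assigned arbitrarily, yet behavior stays reliable because the mapping is stable.

```python
# Arbitrary but deterministic tokens: consistency without veridicality.
import hashlib

def internal_token(world_state: str) -> str:
    """The token says nothing 'true' about the world state;
    it merely co-varies with it, reliably."""
    return hashlib.sha256(world_state.encode()).hexdigest()[:6]

behavior = {
    internal_token("predator nearby"): "flee",
    internal_token("food nearby"): "approach",
}

# The same world state always yields the same token, hence the
# same behavior, with no resemblance between token and world.
print(behavior[internal_token("predator nearby")])  # -> flee
print(behavior[internal_token("food nearby")])      # -> approach
```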