A Different Approach to AI
Back in 2004, when I was an undergrad at Northeastern University in Boston, I started designing a cognitive architecture called Biomimetic Emotional Learning Agents (BELA).
It was a grand plan and I’ve barely cracked the surface of it since then. The overreaching nature of the project was noted by a reviewer of my rejected extended abstract who told me my ideas were θ-baked, where θ ≤ 0.5.
The architecture concept focused on scaffolding up from low-level emotional mechanisms like homeostasis and reactions to knowledge acquisition and learning.
And by learning, I meant the everyday sense of the word “learning” and the biological mechanisms behind it, not computer science kinds of “learning” such as ML techniques.
What resulted was not too much code but:
- The consideration of Phylo vs. Onto
- The notion of “Growing Pains”
- A large list of mental concepts, which you could say are the requirements of such a system
Phylo vs. Onto
Phylogenetic space is the evolutionary space. It’s orthogonal to the ontogenetic space—the individual lifetime.
Onto space includes not just learning and experience, but also initial development—how does a particular creature grow a mind, how does it learn how to learn?
There’s a third space connected to both of these in biology which may or may not matter much for artificial creatures—epigenetics: “Epigenetics is the study of how the environment and other factors can change the way that genes are expressed. While epigenetic changes do not alter the sequence of a person’s genetic code, they can play an important role in development.” (https://www.psychologytoday.com/us/basics/epigenetics)
Growing Pains
“Growing pains” is a training scenario for learning in the onto space.
It would throw an artificial organism into increasingly dangerous and complex sandboxes. The organism would have to master each sandbox, starting with the easiest, before moving on to the next one.
And then at some point we might say it is at “adult” level. Maybe this adult level is the point when it’s transferred out of simulation into a real-world robot, or maybe it’s when a robot is ready for its largest subdomain of the real world.
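For concreteness, here is a minimal sketch of what that curriculum loop might look like in Python. The Sandbox and Agent classes, the mastery threshold, and the scoring scheme are illustrative assumptions of mine, not anything BELA actually specified.

```python
# Hypothetical sketch of "growing pains" curriculum training.
# Sandbox, Agent, mastery_threshold, and the scoring scheme are
# illustrative assumptions, not part of any actual BELA code.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Sandbox:
    name: str
    difficulty: int                           # used only to order the curriculum
    run_episode: Callable[["Agent"], float]   # returns a score for one episode


class Agent:
    def learn_from(self, score: float) -> None:
        """Placeholder: update internal emotional/associative state."""


def growing_pains(agent: Agent,
                  sandboxes: List[Sandbox],
                  mastery_threshold: float = 0.9,
                  window: int = 20) -> None:
    """Advance the agent through sandboxes from easiest to hardest.

    The agent only graduates to the next sandbox once its average score
    over the last `window` episodes reaches the mastery threshold.
    """
    for box in sorted(sandboxes, key=lambda b: b.difficulty):
        recent: List[float] = []
        while True:
            score = box.run_episode(agent)
            agent.learn_from(score)
            recent = (recent + [score])[-window:]
            if len(recent) == window and sum(recent) / window >= mastery_threshold:
                break   # mastered this sandbox; unlock the next, harder one
    # Whatever comes after the last sandbox is the "adult" level:
    # transfer to a real-world robot, or to the largest subdomain it can handle.
```

The only real commitment here is the ordering constraint: no sandbox is unlocked until the previous one is mastered.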
The Requirements
This is the list of mental concepts that a BELA project would need to implement or experiment with:
- reactions
- instincts
- “gut feelings”
- homeostasis (some have suggested “homeodynamics”)
- emotions
  - “background” emotions
  - “primary” emotions
  - “secondary” emotions
- pain
- fear
- phobias
- pleasure
- excitement/appetitive/dopamine vs. tranquility/consummatory/opioid
- rewards
- learning in real-time
- associative learning
- emotional learning
- learning new methods of learning
- change/growth in the learning mechanism itself
- concepts without experience
- conditioning, predispositions
- phylogenetic vs. ontogenetic learning/training
- self-preservation configuration from nurture vs. nature
- “growing pains” training
- emotional-associative knowledge bases, emotional maps
- ontology KBs
- operating domains
- fuzzy logic—many comparisons will be of ranges not single values
- knowledge through association
- configuration parameters for every aspect of the framework
  - loaded from data config files
- parallel processes/modules/layers
- ENS-style [semi]-autonomous agencies
- arbitration
- goal definition/descriptions
  - soft goals—emergent behavior from configuration
  - hard goals—minimize difference between current state and a stored described state (see the sketch after this list)
- feelings of emotions
- moods
- modes of thinking / “frames of mind” / alternate problem-solving approaches
- knowledge-lines and switching to previous configurations
- self-history and forgetfulness
- fall-back configurations for input overload or lack of input
- tricking with form instead of content
- priming
- plasticity / adaptability of agent when it is damaged
- probability of survival of a particular class of agent in a particular domain
- immune system
- stress responses
- “biological modes”
  - arbitration rules for action subsumption between modes?
- autonomic nervous system
  - Sympathetic Nervous System—“fight or flight” capabilities
    - instant reconfiguration of agent viscera
    - stock survival behaviors
  - Parasympathetic Nervous System—“rest and digest”
    - exhaustion, sleep
      - even robots have to recharge their batteries
    - correlate/cull/compress memories?
    - crisis situations that require suppression of SNS stress responses?
- noise
- false memories
- misinformation
- probability of wrong inferences
- capability to lie
- intentionality
- guessing intentions of objects/agents
- theory-of-mind
- mindreading
- mindblindness
- self-model, self-awareness, “proto-self”
- critic/skeptic analyzers, checks-and-balances
- attention, salient objects/agents
- exploiting the environment/situation
  - with constraints, e.g. nondestructive
- utilizing the environment-technology situation to extend computational/problem-solving abilities
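To make one of those items concrete, the “hard goals” entry can be read as minimizing the distance between the agent’s current state and a stored, described goal state. Here is a minimal sketch; the dictionary-of-floats state representation, the weighted absolute-difference metric, and the pick_action helper are my assumptions for illustration, not anything from the original notes.

```python
# Hypothetical sketch of a "hard goal": minimize the difference between the
# current state and a stored goal state. The state representation and the
# distance metric are illustrative assumptions.

from typing import Callable, Dict, Iterable, Optional

State = Dict[str, float]   # e.g. {"energy": 0.8, "hunger": 0.2}


def goal_distance(current: State, goal: State,
                  weights: Optional[Dict[str, float]] = None) -> float:
    """Weighted sum of absolute differences over the variables the goal describes."""
    weights = weights or {}
    return sum(weights.get(k, 1.0) * abs(current.get(k, 0.0) - target)
               for k, target in goal.items())


def pick_action(current: State, goal: State,
                actions: Iterable[str],
                predict: Callable[[State, str], State]) -> str:
    """Choose the action whose predicted next state is closest to the goal."""
    return min(actions, key=lambda a: goal_distance(predict(current, a), goal))


# Example: a hard goal describing a "rested and fed" state.
goal = {"energy": 1.0, "hunger": 0.0}
print(goal_distance({"energy": 0.4, "hunger": 0.7}, goal))   # 0.6 + 0.7 = 1.3
```

A soft goal, by contrast, would not be stored as a target state at all; it would just emerge from how the reactive and homeostatic layers are configured.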
Remarks
From the point of view of cognitive science, some of these concepts may be the same things or overlap. Most are probably connected somehow. For an artificial framework, some may turn out to be unnecessary.
Of course, there is no Shake-n-Bake implementation—one doesn’t just make a module/agent for each one of those concepts, throw them in a bag, and get something that works as expected, or at all.
At least…I don’t think you would.