Back in 2012 I was working on some ideas and a paper to present my version of biologically-inspired development. Not just as a single project or as a technique, but as an abstraction level.
It’s hard to explain, so let me first digress with this: The agent approach became a mainstream part of AI in the 1990s, and one of the most popular AI textbooks of the 2000–2010 era framed all of AI in the context of agents. Certainly within many actual projects, one can point to an agent layer of abstraction.
An abstraction in computer science hides the details that are “below” or “inside”. We encapsulate and black-box things. It’s easy to see how this works with computers, where the abstractions rise from transistors in the electronics domain, up to machine language, up to languages like C, and then on top of that an application which in turn hosts an interpreted scripting language. Each layer saves the user/programmer from having to deal with the nitty-gritty of lower levels on a daily basis (although they sometimes rear their ugly heads).
And yet when we get to the concept of agent, I feel like the abstraction stack is in new territory. And this feeling gets even weirder with development, at least my version of development. And by “development” I’m not talking about software development per se, but the kind of development that biological organisms undergo as neonates and children.
I mentioned agents not just to talk about abstractions getting fuzzy, but also because the agent is one of those AI abstractions that never seemed to reach the glorious potential some may have hoped for. There are many abstractions and techniques in cognitive science and AI, and a lot of people swear by certain specific ones, for instance those neuropsychologists and similar-minded folks who think neurons and their networks are the layer on which we should understand the human mind. AI people have their own obsessions with abstractions and/or models, like ANNs and HMMs, or more recently, DNNs and transformers.
Let me ease you into my version of development: First, consider the agent as an artificial organism, possibly virtual, possibly existing in the real world as a robot. Now imagine that it has an embryo stage (embryogeny) where it actually grows from some small version (analogous to a biological zygote) into its first child-stage form. That would be hard to build for robots, though not impossible, and it’s easy in a virtual world. Also, we are recording all of this data, including what goes on inside the artificial organism’s mind (or whatever prototypical information patterns it has that will eventually become a mind).
Next this organism begins various phases of “childhood”. Again, physical body changes may occur. If we want to be like biology, we also keep the mind in synchronization with the body changes. Of course, that is one of the interesting experimental areas when using this development abstraction—how the mind changes structures and content in a feedback loop with the body. And we are still recording all this data.
Now we also include a special agent in the mix—another artificial organism which is the analogue of a caregiver. This is a special training mechanism. The role of the parent in the early years of animal babies, especially humans, involves not just keeping a baby safe and teaching it some things, but also establishing basic mental structures via interaction.
Also, during these phases, the default framework for this development abstraction would include the notion of environment changes. A typical pattern would be to start with a small relatively safe environment, and then gradually increase the complexity and/or danger as the artificial organism develops and learns.
So these phases go on until adulthood. And depending on the experiment, the adult artificial organism is unleashed onto the world, where “world” is whatever adult environment the researchers have chosen.
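The lifecycle described above can be sketched in code. This is a minimal illustration under my own assumptions, not an API from the paper; all class and variable names here (`Organism`, `Environment`, `run_lifetime`, and so on) are hypothetical stand-ins.

```python
# A minimal sketch of the ontogenetic framework described above.
# All names are hypothetical illustrations, not a published API.

import random
from dataclasses import dataclass, field

@dataclass
class Organism:
    """An artificial organism whose body and mind change together."""
    body_size: float = 0.1                 # starts as a tiny "zygote"
    mind: dict = field(default_factory=dict)

    def grow(self, amount: float) -> None:
        self.body_size += amount
        # Keep the mind in sync with the body (the feedback loop
        # mentioned in the text); here just a stand-in record.
        self.mind["body_model"] = self.body_size

@dataclass
class Environment:
    complexity: float = 0.1                # small, safe world to start

def run_lifetime(seed: int = 0) -> Organism:
    rng = random.Random(seed)
    org = Organism()
    env = Environment()
    log = []                               # record everything, inside and out
    for phase in ["embryo", "infant", "child", "adolescent"]:
        org.grow(rng.uniform(0.1, 0.5))
        env.complexity *= 2                # gradually raise difficulty/danger
        # Caregiver interaction would shape basic mental structures here;
        # this stub just stamps the phase into the mind.
        org.mind[phase] = {"env_complexity": env.complexity}
        log.append((phase, org.body_size, env.complexity))
    org.mind["log"] = log
    return org                             # the "adult" released into the world
```

The point of the sketch is only the shape of the loop: staged growth, an environment whose complexity ramps up alongside it, and a log of everything.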
Data was recorded the whole time—both of the environment and of whatever we can record of the mental experiences of the organisms. Unlike biology we literally have special access as researchers into the minds of our digital creatures. All that data we recorded can be used to do cool and weird experiments, like going “backwards in time” to replay a phase, and even replay it with some variables changed.
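One way to make that “backwards in time” replay concrete: if each phase is driven by a seeded random number generator and fully logged, it can be re-run deterministically, with or without a variable changed. This is a hedged sketch of the idea under my own assumptions; `run_phase` and its parameters are illustrative, not from the paper.

```python
# A sketch of deterministic replay: same seed + same variables gives an
# identical history; changing one variable gives a counterfactual run.

import random

def run_phase(seed: int, env_complexity: float) -> list:
    """Simulate one developmental phase; returns its event log."""
    rng = random.Random(seed)
    events = []
    for step in range(5):
        stimulus = rng.uniform(0, env_complexity)
        events.append(round(stimulus, 6))
    return events

# Original run of, say, the "child" phase.
original = run_phase(seed=42, env_complexity=1.0)

# Exact replay: identical seed and variables reproduce the history.
replayed = run_phase(seed=42, env_complexity=1.0)
assert replayed == original

# Counterfactual replay: same seed, but a harsher environment.
counterfactual = run_phase(seed=42, env_complexity=2.0)
```

In a real system the log would also capture inputs from other agents, but the principle is the same: the recorded run becomes an experimental object you can rewind and perturb.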
So that’s the general idea, at least for the lifetime part (aka ontogeny). I haven’t even mentioned anything about the evolutionary timeline.
Ecological Development Abstraction for AI
My short paper, “An Ecological Development Abstraction for Artificial Intelligence,” was featured in the AAAI (Association for the Advancement of Artificial Intelligence) symposium “How Should Intelligence be Abstracted in AI Research: MDPs, Symbolic Representations, Artificial Neural Networks, or _____?” and published in the technical report for the AAAI 2013 Fall Symposium Series. The symposia took place November 15-17 in Arlington, Virginia.1
That’s also where I met Gary Marcus. I also met Jeff Clune, who’s been doing AI research—especially in combination with evolutionary algorithms—for a long time…but he was skeptical of my poster and my stance on the need for symbol grounding.
This is my abstract:
A biologically inspired AI abstraction based on phylogenetic and ontogenetic development is proposed. The key factor in the abstraction is iterative development without breaking existing functionality. This approach involves design and observation of systems comprised of artificial organisms situated in environments. An ontogenetic framework involving development and learning stages is discussed. There are critical relationships between iterative biological development and iterative software development. Designing artificial organisms will require a balance of architecture design and design of systems that automatically design other systems. New taxonomies might enable comparisons and sharing of artificial organism design and development.
And here is my poster for it:
- 1. Samuel H. Kenyon (2013). “An Ecological Development Abstraction for Artificial Intelligence.” Papers from the 2013 AAAI Fall Symposium Series.