And the award for hypiest headline this week goes to:
First, it didn’t “imagine” itself in any normal sense of that word, which would also imply a whole host of other cognitive faculties. And even in the looser biological sense, we wouldn’t consider this “imagination.” I don’t completely blame the headline writer though, because one of the professors they quoted also said “how the robot imagined itself.”
But what is it?
It’s self-modeling, which programmers can implement directly, for instance using forward kinematics, possibly combined with bounding surfaces as in video games (indeed, I once coded exactly that for a robot arm).
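For readers unfamiliar with the term: forward kinematics just means computing where each part of the robot ends up, given the joint angles. A minimal sketch for a planar two-link arm (the link lengths and angles here are illustrative, not from any particular robot):

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """Chain rotations along the arm, returning the (x, y) position of
    the base and each joint/end-effector in turn."""
    x, y, theta = 0.0, 0.0, 0.0
    points = [(x, y)]
    for length, angle in zip(link_lengths, joint_angles):
        theta += angle                  # accumulate joint rotations
        x += length * math.cos(theta)   # step along the link direction
        y += length * math.sin(theta)
        points.append((x, y))
    return points
```

With unit links and angles of 90° then −90°, the arm goes straight up and then bends right, putting the end-effector at roughly (1, 1). A bounding-surface self-model would then attach spheres or capsules around each segment returned here.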
But here the developers set it up so that it’s automatic—the robot goes through a scenario to learn this model on its own using a visual technique.[1] And it’s not dependent on pre-programmed forward kinematics:
> Here, we propose that instead of directly modeling forward kinematics, a more useful form of self-modeling is one that could answer space occupancy queries, conditioned on the robot’s state.
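An “occupancy query” is just a function from a point in space plus the robot’s state to whether the robot’s body occupies that point. The paper learns this mapping with a neural network; as a toy analytic stand-in (my own illustration, not the paper’s method), here is the same interface computed exactly for the two-link arm, treating each link as a capsule of some radius:

```python
import math

def occupied(point, joint_angles, link_lengths=(1.0, 1.0), radius=0.1):
    """Toy occupancy query: is `point` within `radius` of the arm's body,
    conditioned on the robot state (joint_angles)?"""
    # Build the link segments via forward kinematics.
    x, y, theta = 0.0, 0.0, 0.0
    segments = []
    for length, angle in zip(link_lengths, joint_angles):
        theta += angle
        nx, ny = x + length * math.cos(theta), y + length * math.sin(theta)
        segments.append(((x, y), (nx, ny)))
        x, y = nx, ny
    # Check distance from the query point to each segment.
    px, py = point
    for (ax, ay), (bx, by) in segments:
        dx, dy = bx - ax, by - ay
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        cx, cy = ax + t * dx, ay + t * dy  # closest point on the segment
        if math.hypot(px - cx, py - cy) <= radius:
            return True
    return False
```

The learned version answers the same kind of query, but from a network trained on visual observations, so it works even when no analytic kinematic model was programmed in.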
Second, it’s not the first robot to do this. In 2006, this starfish robot project at Cornell, developed by Viktor Zykov, Josh Bongard and Hod Lipson, could automatically create a self-model, learn to walk, and then use self-modeling to adjust its behavior after damage:
> It begins by building a series of computer models of how its parts might be arranged, at first just putting them together in random arrangements. Then it develops commands it might send to its motors to test the models. A key step, the researchers said, is that it selects the commands most likely to produce different results depending on which model is correct. It executes the commands and revises its models based on the results. It repeats this cycle 15 times, then attempts to move forward.
The robot maintains several candidate models competing to best explain its recent experience. The researchers then experimentally injure it, for instance by removing part of a leg, and the robot goes through another 16 iterations to update its models, arriving at a new best candidate so it can keep operating.
The researchers limited the robot to 16 test cycles with space exploration in mind. “You don’t want a robot on Mars thrashing around in the sand too much and possibly causing more damage,” Bongard explained.
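The loop described above is a form of active model selection: propose candidate self-models, pick the motor command the candidates *disagree* on most, execute it, and keep the models that best predict what actually happened. A heavily simplified sketch under my own assumptions (scalar commands and outcomes, models as plain functions; the real system modeled full body morphologies):

```python
def disagreement(models, command):
    """Variance of the candidate models' predictions for one command.
    High variance means executing this command will discriminate between them."""
    preds = [m(command) for m in models]
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)

def self_model_loop(robot, models, commands, cycles=16):
    """Run the estimate/test cycle: choose the most informative command,
    observe the real outcome, and prune the worst-predicting models."""
    history = []
    for _ in range(cycles):
        cmd = max(commands, key=lambda c: disagreement(models, c))
        outcome = robot(cmd)            # execute on the physical robot
        history.append((cmd, outcome))
        # Rank models by total prediction error over all observations so far.
        models.sort(key=lambda m: sum((m(c) - o) ** 2 for c, o in history))
        models = models[: max(1, len(models) // 2)]  # keep the better half
    return models[0]  # best surviving self-model
```

Note how the command choice maximizes *disagreement* rather than being random: that is the key step the researchers highlight, and it is why so few physical trials suffice.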
And there have been other “resilient robot” research projects since 2006, including some that are self-reconfigurable.[2]
All this prior work is listed not to disparage the Columbia project, but to point out, for the innocent readers of pop-sci articles, that not everything is “the first.” And you should be especially suspicious if the article also says neural nets were involved, since researchers can take any old thing from even decades ago, redo it with neural nets / deep learning, and the press eats it up.
- [1] Chen, Boyuan et al. “Full-Body Visual Self-Modeling of Robot Morphologies.” Science Robotics 7.68 (2022): eabn1944.
- [2] Zhang, Tan et al. “Resilient Robots: Concept, Review, and Future Directions.” Robotics 6.4 (2017): 22. https://pdfs.semanticscholar.org/1a4e/2e07882ff53cd6765f381c77966c0c005311.pdf