Heterarchies and Society of Mind’s Origins

One of the key ideas of Society of Mind [1] is that at some range of abstraction levels, your wetware is a bunch of asynchronous agents. Agents are simple, but a certain organization of them results in what we call “mind.”

The book Society of Mind includes many sub-theories of how agents might work, structures for connecting them together, memory, etc. It was inspired in part by the child psychology research of Jean Piaget.

Although Minsky mentions some of the development points that led to the book, he makes no explicit references to old papers. The book is copyrighted “1985, 1986.” Rewind to 1979: in the book Artificial Intelligence: An MIT Perspective, there is a chapter by Minsky called “The Society Theory of Thinking.” In a note, Minsky summarizes it as [2]:

Papert and I try to combine methods from developmental, dynamic, and cognitive psychological theories with ideas from Artificial Intelligence and computational theories. Freud and Piaget play important roles.

OK, that shouldn’t be a surprise if you’ve read the Society of Mind book. But there’s another predecessor: heterarchies.

In 1971 Patrick Winston described heterarchical organization as [3]:

An interacting community of processes, some narrow experts, others broad generalists, and still others in the role of critics.

“Heterarchy” is a term that many attribute to Warren McCulloch in 1945, based on his neural research. Although it may have been abandoned in AI, the concept had some success in anthropology. It is important to note that a heterarchy can be viewed as a parent class of hierarchy: a heterarchy can contain hierarchies.
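
To make Winston’s description concrete, here is a minimal Python sketch of a heterarchical community of processes: narrow experts, a broad generalist, and a critic all read from and post to a shared pool, with no fixed chain of command. This is my own illustration, not anything from the 1971 memo; the agent names, the pool dictionary, and the message format are all invented.

```python
# Minimal illustrative sketch of a heterarchical community of processes.
# Invented for illustration; not taken from any of the cited papers.

class Agent:
    def __init__(self, name):
        self.name = name

    def step(self, pool):
        """Read the shared pool and possibly return new (key, value) items."""
        return []

class EdgeExpert(Agent):          # a "narrow expert"
    def step(self, pool):
        if "image" in pool and "edges" not in pool:
            return [("edges", f"edges of {pool['image']}")]
        return []

class SceneGeneralist(Agent):     # a "broad generalist"
    def step(self, pool):
        if "edges" in pool and "scene" not in pool:
            return [("scene", "blocks-world guess from " + pool["edges"])]
        return []

class Critic(Agent):              # an agent "in the role of critic"
    def step(self, pool):
        if "scene" in pool and "critique" not in pool:
            return [("critique", "scene guess looks plausible")]
        return []

def run(agents, pool, rounds=5):
    # No master controller decides who runs when; every agent gets a look
    # at the pool each round and may contribute. A hierarchy could still be
    # embedded here by having one agent direct a fixed set of sub-agents.
    for _ in range(rounds):
        for agent in agents:
            for key, value in agent.step(pool):
                pool[key] = value
    return pool

if __name__ == "__main__":
    result = run([Critic("c"), EdgeExpert("e"), SceneGeneralist("g")],
                 {"image": "block scene"})
    print(result)
```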

In 1973 the student Eugene Freuder, who later became well known for constraint-based reasoning, reported on his “active knowledge” thesis work on vision, a system called SEER [4]. In one of the funniest papers I’ve read, Freuder warns us that:

this paper will probably vacillate between cryptic and incoherent.

Nevertheless, it is healthy to write things down periodically. Good luck.

And later on that:

SEER never demands that much be done, it just makes a lot of helpful suggestion. A good boss.

This basic structure is not too hairy, I hope.

If you like hair, however, there are enough hooks here to open up a wig salon.

He refers to earlier heterarchy uses in the AI Lab, but says that they are isolated hacks, whereas his project is properly a system designed to be a heterarchy, one which allows any number of hacks to be added during development. And this supposedly allows the system to make the “best interactive use of the disparate knowledge it has.”

This supposed heterarchical system:

  • “provides concrete mechanisms for heterarchical interactions and ‘institutionalizes’ and encourages forms of heterarchy like advice” (see the sketch below)
  • allows integration of modules during development, a feature for the user, i.e., the programmer
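
To give a flavor of what institutionalized “advice” might look like, here is a small Python sketch of suggestion-based interaction: modules post non-binding suggestions that a consumer is free to rank, use, or ignore, and a new module can be registered during development without touching the existing ones. This is my own guess at the flavor, not Freuder’s actual mechanism; the AdviceBoard class, the advisor functions, and the message format are invented.

```python
# Sketch of advice-style interaction: modules offer suggestions, nothing
# commands. An invented illustration, not Freuder's implementation.

from typing import Callable, List

Suggestion = tuple[str, float, str]   # (topic, confidence, content)

class AdviceBoard:
    def __init__(self):
        self.advisors: List[Callable[[dict], List[Suggestion]]] = []

    def register(self, advisor):
        """New modules can be hooked in during development."""
        self.advisors.append(advisor)

    def suggestions(self, context: dict, topic: str) -> List[Suggestion]:
        collected = []
        for advisor in self.advisors:
            collected.extend(s for s in advisor(context) if s[0] == topic)
        # Consumers see ranked advice; they decide what, if anything, to do with it.
        return sorted(collected, key=lambda s: -s[1])

def shadow_advisor(context):
    if context.get("dark_region"):
        return [("region-label", 0.6, "that dark region may be a shadow")]
    return []

def object_advisor(context):
    if context.get("dark_region"):
        return [("region-label", 0.3, "that dark region may be an object")]
    return []

board = AdviceBoard()
board.register(shadow_advisor)
board.register(object_advisor)   # added later, no other module changes

for topic, conf, text in board.suggestions({"dark_region": True}, "region-label"):
    print(f"{conf:.1f}  {text}")
```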

One open aspect of this is the parallelism, and whether it was actually better than serial methods. The MIT heterarchy thread eventually turned into Society of Mind, or at least that’s what Patrick Winston indicates in his section introduction [2]:

Minsky’s section introduces his theory of mind in which the basic constituents are very simple agents whose simplicity strongly affects the nature of communication between different parts of a single mind. Working with Papert, he has greatly refined a set of notions that seem to have roots in the ideas that formerly went by the name of heterarchy.

Society of Mind is highly cited but rarely implemented or tested. Reactive (a.k.a. behavioral) robotics systems can be heterarchies, but they are either ignored by mainstream AI or relegated to the bottom of three-layer robot architectures. The concepts of modularity and parallel processing have been folded into general software engineering paradigms.

But I wonder if the heterarchy concept(s) for cognitive architectures were abandoned too quickly. The accidents of history may have already incorporated the best ideas from heterarchies into computer science, but I am not yet sure about that.


  • [1] M. Minsky. The Society of Mind. New York: Simon and Schuster, 1986.
  • [2] P.H. Winston & R.H. Brown, Eds., Artificial Intelligence: An MIT Perspective, vol. 1, MIT Press, 1979.
  • [3] P.H. Winston, “Heterarchy in the M.I.T. Robot.” MIT AI Memo Vision Flash 8, March 1971.
  • [4] E.C. Freuder, “Active Knowledge.” MIT AI Memo Vision Flash 53, Oct. 1973. http://dspace.mit.edu/bitstream/handle/1721.1/41086/AI_WP_053.pdf?sequence=4