A World of Affect

Emotions and Semantic Nets

Back in the fall of 2005 I took a class at the MIT Media Lab called Commonsense Reasoning for Interactive Applications taught by Henry Lieberman and TA’d by Hugo Liu.

Screenshot from Affectworld

For the first programming assignment I made a project called AffectWorld, which allows the user to explore in 3D space the affective (emotional) appraisal of any document.

The program uses an affective normative ratings word list expanded with the Open Mind Common Sense (OMCS) knowledgebase. This norms list is used both for appraising input text and for generating an affect-rated image database. The affective norms data came from a private dataset created by Margaret M. Bradley and Peter J. Lang at the NIMH Center for the Study of Emotion and Attention, consisting of English words rated in terms of pleasure, arousal and dominance (PAD).
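The core mechanism can be sketched as a small lookup-and-average: each rated word contributes its pleasure/arousal/dominance values, and a text's appraisal is the mean over the rated words it contains. This is only an illustrative sketch; the words and ratings below are invented, not real values from the Bradley and Lang norms.

```python
# Hypothetical affective norms list: word -> (pleasure, arousal, dominance).
# Ratings are invented for illustration, not real ANEW values.
PAD_NORMS = {
    "love":   (8.7, 6.4, 5.9),
    "war":    (2.0, 7.5, 4.1),
    "calm":   (7.0, 2.4, 6.4),
    "danger": (2.9, 7.3, 3.9),
}

def appraise(text):
    """Average the PAD ratings of all rated words in the text.

    Returns a (pleasure, arousal, dominance) tuple, or None if no
    word in the text appears in the norms list.
    """
    hits = [PAD_NORMS[w] for w in text.lower().split() if w in PAD_NORMS]
    if not hits:
        return None
    n = len(hits)
    return tuple(sum(dim) / n for dim in zip(*hits))

# "calm" and "war" average out to a mid-pleasure, mid-arousal point.
print(appraise("the calm before the war"))
```

Real systems would also need stemming, negation handling, and the OMCS expansion to cover words missing from the norms list, but the averaging idea is the same.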

To generate the interactive visualization, AffectWorld analyzes a text, finds images that are linked affectively, and applies them to virtual 3D objects, creating a scene filled with emotional metaphors which you can navigate in a first person point of view.
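One plausible way to link a text to images affectively (a sketch of the idea, not necessarily what AffectWorld actually did) is nearest-neighbor matching in PAD space: appraise the text as a single PAD point, then choose the image whose affect rating lies closest to it. Filenames and ratings here are made up.

```python
import math

# Hypothetical affect-rated image database: filename -> (P, A, D).
IMAGE_PAD = {
    "storm.jpg":  (2.5, 7.0, 3.5),
    "meadow.jpg": (7.8, 3.0, 6.0),
    "crowd.jpg":  (5.0, 6.5, 4.5),
}

def closest_image(pad):
    """Return the filename whose rating is nearest to `pad` (Euclidean)."""
    return min(IMAGE_PAD, key=lambda name: math.dist(pad, IMAGE_PAD[name]))

# A tense, unpleasant passage should land near the storm image:
print(closest_image((2.0, 7.5, 4.0)))  # -> "storm.jpg"
```

Texturing the cubes in the 3D scene then reduces to running this selection once per object, optionally with some randomness so repeated words don't produce identical walls.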

The image files were scraped from a few places, including Eric Conveys an Emotion, in which some guy photographed himself making every emotional expression he could think of. I used OGRE as the 3D graphics engine.

Screenshot from Affectworld

So what was the point? Somebody asked that in the class and Hugo interjected that it was art.

Basically, to an outsider, the emotional programming looks like a pseudo-random image selector applied to cubes in a 3D world. Well, that's not completely true. With a lot more pictures to choose from (with accurate descriptive words assigned to each picture), I think one could make a program like this that gives an emotional feel appropriate to a text. And 17 years later, modern generative AI with big data does exactly that.

Screenshot from Affectworld

Stories are a kind of text that explicitly describe affect: the emotions of characters and the environments enveloping the characters. AffectWorld programs would never be perfect though, because stories themselves are just triggers, and what they trigger in any given person’s mind is somewhat unique.

This is perhaps the realm that film directors adapting novels live in: creating a single visual representation of something that already has thousands or millions of mental representations. But in AffectWorld I simplify the problem by assuming from the beginning that the specific pictures are arbitrary. It is only the emotional aspects that matter.

At the time of the demo, some people seemed momentarily impressed, but that was partially because I made them look at a bunch of boring code and then suddenly whipped out the interactive 3D demo. Otherwise, my first version of AffectWorld was just a glimmer of something potentially entertaining.

Screenshot from Affectworld

Part of the reason why I took the class was because I was skeptical of using commonsense databases, especially those based on sentences of human text. During my early natural language explorations I became suspicious of what I learned later was called the “hermeneutic hall of mirrors” by Stevan Harnad—in other words, computer “knowledge” dependent on English (or any other human language) is basically convoluted Mad Libs. However, I did witness other students making interfaces which were able to make use of shallow knowledge for unique user experiences. Just as Mad Libs lends itself to a kind of surprising weird humor, so do some of these “commonsense” programs.

This is somewhat useful for interaction designers—in some cases a “cute” or funny mistake is better than a depressing mistake that triggers the user to throw the computer out the window. Shallow knowledge is another tool that is perfectly fine to use in certain practical applications. But it’s not a major win for human-level or “strong” AI. And in the form of semantic networks and ontologies it apparently hasn’t been a major win for the Internet either.

The Semantic Web was a similar beast, as far as I can tell. The original “Web 3.0” has been around for a long time, at least conceptually. All the way back in 2006, Tim Berners-Lee (the inventor of the World Wide Web) was telling us that the Semantic Web was the new hotness. Remember OWL? And now the Semantic Web is partially dead and/or partially just mixed into things without fanfare.

The Semantic Web is incompatible with the commercial incentives of most technology companies. For instance, it would currently be irrational for Facebook to voluntarily publish their social network using the friend of a friend schema. Their profit is derived from their centralized, private ownership of this data.


Ask HN: What happened to the semantic web?

I think the main issue is that even though “knowledge representation” with ontologies is an enticing goal, it’s simply a fact that real entities, as used by humans at a practical level, don’t map neatly onto mathematically-sound hierarchies. To see this, just look at the arguments the ancient Greeks already had as to whether a human is a “two-legged featherless animal” or the endless online arguments as to whether a “circle is an ellipse” or vice versa.

Ask HN: What happened to the semantic web?
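The circle-versus-ellipse argument that commenter mentions is the classic circle–ellipse problem from object-oriented design, and it's easy to demonstrate: if Circle subclasses Ellipse (which seems mathematically natural), then an operation that is valid for any ellipse silently breaks the circle's invariant. A minimal sketch:

```python
class Ellipse:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def stretch_width(self, factor):
        """Valid for any ellipse: scale one axis independently."""
        self.width *= factor


class Circle(Ellipse):
    """Mathematically a circle *is* an ellipse, so subclassing looks natural."""
    def __init__(self, diameter):
        super().__init__(diameter, diameter)


c = Circle(10)
c.stretch_width(2)          # inherited, perfectly legal...
print(c.width == c.height)  # False -- the "circle" is no longer a circle
```

Neither direction of inheritance fixes this cleanly, which is the commenter's point: the "is-a" relations humans use in practice resist being frozen into a single sound hierarchy.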

In some narrow contexts, semantic net powered apps could be smarter than humans. But they do not understand as human organisms do.

But isn’t that ultimately how the brain works, just a big messy semantic net in terms of itself?

That skips over the fact that the nodes in computer semantic networks depend on human input and/or interpretation for their meaning. And it skips symbol grounding. Somebody might argue that the patterns have inherent meaning, but I don’t buy that for the entirety of human-like meaning because of our evolutionary history and the philosophical possibility that our primitive mental concepts are merely reality interfaces selected for reproductive ability in certain contexts.

Epilogue

At the time of the commonsense reasoning class—and also Marvin Minsky’s Society of Mind / Emotion Machine class I took before that—a graduate student named Push Singh was the mastermind behind Open Mind Common Sense.

Although I was skeptical of that kind of knowledgebase, I was very interested in his approaches and his courage to tackle some of the Society of Mind and Emotion Machine architectural concepts. His thesis project was in fact called EM-ONE, as in Emotion Machine 1, dealing with levels of cognition and mental critics. I attended his defense. I didn’t know him very well, but I talked to him several times and he had encouraged me to keep the dialogue going. I recall one day when I was reading a book about evo-devo in the second floor cafe at the Harvard Co-op bookstore, ignoring all humans around me, Push happened to be there and made sure to say hello and ask what I was reading. He said to get in touch sometime.

One day I went to his website to see if there was anything new, and found a message from somebody else posted there: Push was dead. He had committed suicide. Below that, stuck to my computer monitor, lurked an old post-it note with a now unrealizable to-do: “Go chat with Push.”
