On Strong AI & Robotics

Strong AI is a Design Problem

Human-level artificial intelligence is a design problem.

Design and Specialization

“Design” probably brings to mind various professions dealing with design of form, such as industrial design, graphic design and interior design.

But the term design is also used in other form-creation disciplines, such as architecture and software-related technology. In technology, you have user interface design, interaction design and user experience design.

I do not often encounter software engineers who style themselves as “designers.” However, when I spend time with people in the various disciplines related to user experience, calling oneself a “designer” is perfectly normal—there is an atmosphere of designing form. Some of them also develop software, though secondarily. It’s common for designers to express a wish to learn to code. Perhaps the design specialists are a bit more open to cross-discipline experience.

Specialized design fields presumably have a lot of overlap if one examines their general aspects: the notion of abstraction and how abstractions are used. Yet software engineers and developers don’t typically think of themselves as “designers.” Nor are they expected or encouraged to.

I have been in situations in which I designed the interface of an application using primarily interaction design practices, and also developed the software using typical engineering practices, yet no one could quite comprehend that I had done both of those things competently. Professional designers in the realm of human-computer interfaces and user experience were brought in and were genuinely surprised that my product had already been designed—they expected a typical engineer’s interface design.

Engineers have a very bad reputation in industry for making horrible human-computer interfaces. But as I said, they aren’t expected to make good ones. Or worse, in particular cases, there were no expectations for design at all because the people in charge had no concept of it.

What I’m trying to get across is my observation that design of form is not integrated with engineering and computer science disciplines, at least not to the degree that a single person is expected to be competent in both. Entire corporate organizations that were traditionally hyper-engineering-focused have had rough times trying to comprehend what interaction design is and why it is important for making usable products that customers want to buy.

It’s easy to point to some mega-popular companies that got the balance right from the start, at least organizationally—not necessarily for each individual in the company—such as Google. Google has a reputation as a place for smart programmers to go hack all day to make information free for all humankind, or some other semi-fiction. But really it became big, and has stayed big, in part because of the integration of design into its human-computer interfaces. If you don’t think of Google as a design company, it’s because you think design means stylish, trendy, “new,” etc. Transparent design, however, is one of the best kinds of design. If a single click helps users, they don’t care that massive amounts of design, development, and computational power lie behind the result—they just like that it works and is useful.

You might expect to be stopped in your tracks by the mere sight of some artistically designed form and say—wow, look at that amazing design. But the truth is that invisible design is an apex most never reach—even those who try find it difficult.

Simple is hard.

The phrase “less is more” may be trite, yet it’s a good target. If you notice the interface and find yourself marveling at it, it’s probably getting in the way of actually accomplishing a goal.

It should come as no surprise that most AI implementations, and many of the theories, are now and have always been generated by people living in the realm of computer science and/or engineering, and not in the realm of design.

Why is Strong AI a Design Problem?

Strong AI was the original AI—in pursuit of human-level cognition in machines.

You might ask: Is AI a problem at all?

Some of the early AI papers referred to “the artificial intelligence problem”—singular—such as the 1955 proposal for the Dartmouth Summer Research Project on AI (J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” 1955) and a report from one of the proposers, Marvin Minsky, written the following year at MIT Lincoln Laboratory (“Heuristic Aspects of the Artificial Intelligence Problem,” MIT Lincoln Laboratory Report 34-55, 1956). But even in those papers, they had already started viewing the problem as a collection of problems. Each intellectual ability that humans have should be mechanizable, and therefore computerizable. The problem, for each ability, was figuring out what that mechanical description is. Still, be it one problem or many, AI was described in terms of problems.

So, on the premise that AI is indeed a problem, why would I say it’s a design problem? Why do AI books rarely, if ever, even mention the word “design”?

AI is a design problem because there is no mechanical reduction that will automatically generate a solution for the problem.

And there is no single, best solution.

That doesn’t mean that intelligence is irreducible. It means that the creation of an intelligent artifact necessarily involves the artifact (which is a form) and an environment (the context). And creating an intelligent artifact in that system has no single, directly available answer.

Form and Context

Although AI is often defined and practiced as hundreds of narrow specialist sub-disciplines, as one old AI textbook put it (E. Charniak and D. V. McDermott, Introduction to Artificial Intelligence. Reading, Mass.: Addison-Wesley, 1985):

The ultimate goal of AI research (which we are very far from achieving) is to build a person, or, more humbly, an animal.

A more recent popular undergraduate AI textbook (S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. Upper Saddle River, N.J.: Prentice Hall/Pearson Education, 2003) abandons all hope of human-like AI:

Human behavior, on the other hand, is well-adapted for one specific environment and is the product, in part, of a complicated and largely unknown evolutionary process that still is far from producing perfection. This book will therefore concentrate on general principles of rational agents and on components for constructing them.

So we’re starting to see how a major portion of AI research has decided that, since humans are too difficult to understand and are not perfect, the focus will be on ideal models that work in ideal contexts. The derision of real-world environments as “one specific environment” is interesting—if it were simply one specific environment, wouldn’t that make human-type AI easier to figure out? But it is not an easy environment to define, of course. It is specific to areas of Earth and involves myriad dynamic concepts interacting in different ways. And, as said, it is not perfect.

But design disciplines routinely tackle exactly those kinds of problems.

A problem is defined, perhaps very vaguely at first, which involves a complex system operating in a real world environment. Even after coming up with a limited set of requirements, the number of interactions makes creating a form that maintains equilibrium very hard.

There are too many factors for an individual to comprehend all at once without some process of organization and methods that good designers use. A bad designer has a very small chance of success given the complexity of any problem.

Yet somehow some designers make artifacts that work. Both the successes and failures of design are of importance to making intelligent systems. I’m not going to go into design methods in this essay, but I will say something about form and context.

It is an old notion that design problems attempt to make a good fit between a form and its context (C. Alexander, Notes on the Synthesis of Form. Cambridge, MA: Harvard University Press, 1971). Christopher Alexander (coming from the architecture of buildings and villages) described form-context systems as “ensembles,” of which there are a wide variety:

The biological ensemble made up of a natural organism and its physical environment is the most familiar: in this case we are used to describing the fit between the two as well-adaptedness.

…The ensemble may be a musical composition—musical phrases have to fit their contexts too…

…An object like a kettle has to fit the context of its use, and the technical context of its production cycle.

Form is not naturally isolated. It must be reintroduced into the wild of the system at hand, be it a civilized urban human context or, literally, a wild African savanna. Form is relevant because of its interface with context. They are the yin and yang of creating and modifying ensembles.

And really, form is merely a slice of form-context. The boundary can be shifted arbitrarily. Alexander suggests that a designer may actually have to consider several different context-form divisions simultaneously.

And this ensemble division is another important aspect of design that goes right to the heart of artificial intelligence and even cognitive science as a whole—is the form of intelligence the brain, or should the division line be moved to include the whole nervous system, or the whole body, or perhaps the whole local system that was previously defined as “environment”? Assuming one without consideration for the others (or not admitting the assumption at all) is a very limiting way to solve problems. And Alexander’s suggestion of layering many divisions might be very useful for future AI research.

Someone might argue that using design methods to create the form of an artificial mind is not necessary because AI researchers should instead be trying to implement a “seed” which grows into the form automatically. I’ve offered alternative views of “seeds” for cognitive architectures myself.

However, that involves defining a context in which a new form, the seed, turns into the target form over time. But the form is still being fit to the ensemble. Indeed, we may need to solve even more form-context design problems, such as the development mechanisms. One could imagine designing a prototype form, and then afterward somehow working in reverse to figure out a compressed form which can grow into a better version of the prototype form with the assistance of environmental interactions. Regardless, design was not made irrelevant.


In case it wasn’t clear by now, the form of an artificial organism includes its informational aspect.

Its mind is a form.

Creating mental forms is a design problem because there is no single perfect solution.

One can solve a design problem by creating a form that sets false all the binary “misfits” (states in the ensemble where form and context do not fit). This satisfies the requirements only at that level, not at some optimal level. It is not the “best possible” way (C. Alexander, Notes on the Synthesis of Form. Cambridge, MA: Harvard University Press, 1971). There is no best possible way—if there were, it would not be a design problem.
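Alexander’s binary-misfit view can be sketched in a few lines of code. The misfit predicates and kettle attributes below are hypothetical examples of my own, not taken from Alexander’s book; the point is only that a form “solves” the problem when every misfit is false, and that multiple, quite different forms can do so:

```python
# A minimal sketch of Alexander's binary "misfit" view of design.
# The misfit names and candidate kettles are invented for illustration.

# Each misfit is a predicate: given a form, it returns True when the
# form and its context fail to fit on that particular point.
misfits = {
    "too_heavy": lambda form: form["weight_kg"] > 2.0,
    "handle_too_hot": lambda form: form["handle_material"] == "metal",
    "boils_too_slowly": lambda form: form["power_watts"] < 1500,
}

def fits(form):
    """A form 'solves' the design problem when every misfit is false."""
    return all(not misfit(form) for misfit in misfits.values())

# Two quite different kettles both set every misfit to false,
# so neither is "the" solution:
kettle_a = {"weight_kg": 1.2, "handle_material": "plastic", "power_watts": 2000}
kettle_b = {"weight_kg": 0.9, "handle_material": "wood", "power_watts": 1800}

print(fits(kettle_a), fits(kettle_b))  # both satisfy the misfit set
```

Note that `fits` only reports satisfaction, not quality—exactly the distinction drawn above between removing misfits and finding some “best possible” form.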

Artificial minds do not have a best possible way either—they merely work in a context or they don’t. You could synthesize many different minds and compare them in a context one day and say “this one is the best”—but the next day a different one might be the best, and so on…
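That context-dependence can be shown with a toy comparison. The candidate “minds,” their traits, and the context weightings below are all invented for illustration; the point is that the same pool of candidates yields a different “best” form when the context’s demands shift:

```python
# Toy illustration: which candidate "mind" scores best depends entirely
# on the context it is evaluated in. All names and numbers are invented.

candidates = {
    "mind_a": {"caution": 0.9, "speed": 0.2},
    "mind_b": {"caution": 0.3, "speed": 0.8},
}

def score(mind, context):
    # The context weights each trait differently from day to day.
    return sum(mind[trait] * weight for trait, weight in context.items())

day1 = {"caution": 1.0, "speed": 0.1}  # a slow, hazardous environment
day2 = {"caution": 0.1, "speed": 1.0}  # a fast-moving environment

best_day1 = max(candidates, key=lambda name: score(candidates[name], day1))
best_day2 = max(candidates, key=lambda name: score(candidates[name], day2))
print(best_day1, best_day2)  # different winners in different contexts
```

There is no ranking of the candidates independent of a context—only fits and misfits between particular forms and particular ensembles.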