
The Curse of AI and AGI

I originally wrote this about a decade ago and have updated it a bit. Although I’m presently on hiatus, busy hacking AI agents and robots, I thought it might still be of interest.

The Curse

The curse of AI work is to waste time implementing tools.

These days there are lots of popular “off the shelf” software tools for AI, but they don’t give you easy structures for making complete agents or robots with cognitive architectures in the vein of Strong AI research. You’ve got to do that yourself.

And if one is doing AI with embodied robots, the curse also involves building robotic hardware. All this “other” work undermines focus on the juicy stuff.

And then you have to analyze “results”! Ugh!

However, the role of the coder—or the engineer, the hacker, the researcher, what have you—is very important in this realm, which leads to a meta-curse of sorts…

Second-Order Cybernetics

As second-order cybernetics seems to say, the researcher is inherently part of the system:

Occurring between observer and observed, interaction is the primitive from which arises what-can-be-known. (P.A. Pangaro, New Order from Old: The Rise of Second-Order Cybernetics and Implications for Machine Intelligence, 1988/2002)

And…

Not merely passive, observers are participants—in a particular sense, that they themselves have direct impact on what they see, on how they observe. (Pangaro, 1988/2002)

But wait, what is Second-Order Cybernetics? Indeed, what is First-Order?

“First-order” refers to the first generation of cybernetics focused on questions of self-organization and automated feedback in organisms and other distributed informational networks. First-order cybernetics accounts for Wiener’s aircraft, the engineer’s perspective of controlling the output of a system through input feedback, whereas second-order cybernetics emphasizes the necessity for a reflexive participant observer to participate in the feedback. The shift to “second-order” cybernetics, then, marks the examination of observing systems. (Won Jeon, “Second-Order Recursions of First-Order Cybernetics: An ‘Experimental Epistemology’,” Open Philosophy, vol. 5, no. 1, 2022, pp. 381–395, https://doi.org/10.1515/opphil-2022-0207)
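To make the first-order/second-order distinction concrete, here is a toy sketch of my own (not from Pangaro or Jeon), assuming a simple proportional feedback loop: the first-order controller just regulates a value toward a setpoint, while the second-order version folds the observer’s bias into what gets measured, so the observer’s stance changes where the system settles.

```python
# Toy illustration (my own sketch, not from the cited papers):
# first-order cybernetics as a plain feedback controller, and
# second-order as a loop in which the observer is part of what's observed.

def first_order_step(setpoint, measured, gain=0.5):
    """The engineer's view: drive the error toward zero."""
    error = setpoint - measured
    return gain * error  # control signal fed back into the system

def second_order_step(setpoint, measured, observer_bias, gain=0.5):
    """The observer is in the loop: what counts as 'measured' already
    reflects how the observer chose to observe (observer_bias)."""
    perceived = measured + observer_bias  # observation is not neutral
    error = setpoint - perceived
    return gain * error

# First-order: regulate a value toward 10.0
value = 0.0
for _ in range(20):
    value += first_order_step(10.0, value)
print(round(value, 2))  # converges near 10.0

# Second-order: the same loop with a biased observer settles elsewhere
value = 0.0
for _ in range(20):
    value += second_order_step(10.0, value, observer_bias=2.0)
print(round(value, 2))  # converges near 8.0: the observer shifts the outcome
```

The point of the toy: with a biased observer in the loop, the “same” controller settles at 8 instead of 10. The way the observing is done is part of the result.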

There will always be the design of the design, and the control of the control. Recognizing that is probably better than ignoring it.

Especially so with any tech that appears to be even remotely close to human-level intelligence in some domain.

As a developer, researcher, or even reporter, consider:

How much of what appears to be human intelligence in an AI system is actually human intelligence, that is, the programmers, the input data, and the interpretations by actual humans that are part of the system?
