*A Resurgence in Analog AI?*

Cognitive Science and AI typically subscribe to computationalism—the mind is a form of computation in the brain (or the overall nervous system including the brain).

In the 1940s, explaining cognition as the brain computing was new, and it started catching on in what would become computer science and AI…and eventually, to some degree, neuroscience. But many were modeling the brain using what you could call analog math.^{1}Piccinini, G. (2004). The first computational theory of mind and brain: A close look at McCulloch and Pitts's "Logical Calculus of Ideas Immanent in Nervous Activity". *Synthese*, *141*(2), 175-215.

And there were actual analog computers, many of which were used by the U.S. military starting in World War 2.

Nowadays, most people use *digital* computers for research and AI work…and pretty much everything else. But what happened to the non-digital theories, and why aren't analog computers around anymore to experiment with? And what is an analog computer, anyway?

Digital computers represent everything with ones and zeros. These bits are unequivocally one or zero; there's no other option. Your typical processor, at a low level, only does a few things: copying bits from one place to another, addition, subtraction, and Boolean logic. Everything else is built up on top of those.
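That "everything is built on top of a few bit operations" claim can be made concrete. Here's a minimal Python sketch (the function names are mine, for illustration) of addition assembled purely from Boolean logic, the way a ripple-carry adder does it in hardware:

```python
# Building addition out of Boolean logic alone, as a digital
# processor does at the gate level (a "ripple-carry adder").
def full_adder(a, b, carry_in):
    """Add three bits using only XOR, AND, and OR."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_bits(x, y, width=8):
    """Add two integers bit by bit, least-significant bit first."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_bits(19, 23))  # 42, computed purely from Boolean operations
```

Subtraction, comparison, and the rest are built up the same way, from the same handful of bit-level primitives.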

But an analog computer represents "numbers" as some continuous physical quantity, such as an electrical voltage. If you hook an oscilloscope to a stereo and play a song, you can view the analog signals that drive the speakers to produce the sound you hear as music. You could think of such a signal as a variable. Now imagine combining several of those signals somehow to get a new signal; that new signal is the solution.
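As a rough numerical illustration of that idea (a hypothetical sketch with made-up frequencies, not a model of any real circuit), here are two signal "variables" combined into a new one, the way a summing circuit adds two voltages:

```python
import numpy as np

# Two audio-like signals treated as continuous variables, then
# "mixed" into a new signal -- the analog analogue of combining
# two voltages with a summing circuit.
t = np.linspace(0.0, 1.0, 1000)               # one second of time
signal_a = np.sin(2 * np.pi * 440 * t)        # 440 Hz tone
signal_b = 0.5 * np.sin(2 * np.pi * 660 * t)  # quieter 660 Hz tone
mixed = signal_a + signal_b                   # the combined "solution" signal

print(mixed.shape)  # (1000,)
```

On a real analog machine there is no sampling step: the combination happens continuously, in the physics itself.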

Here’s a more mathematical definition, if you’re into that kind of thing^{2}Piccinini, G. and Bahar, S. (2013), Neural Computation and the Computational Theory of Cognition. Cognitive Science, 37: 453-488. https://doi.org/10.1111/cogs.12012:

> Roughly, abstract analog computers are systems whose function is to manipulate continuous variables—variables that can vary continuously over time and take any real values within certain intervals—specified by differential equations, so as to instantiate appropriate functional relationships between the variables.
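To make that definition concrete, here is the classic analog-computer setup: solving the differential equation x″ = −x (a harmonic oscillator) with two integrators in a feedback loop. A real machine integrates continuously with op-amp circuits; this sketch simulates the same patch with small time steps (step size and initial conditions are illustrative):

```python
import numpy as np

# Analog-computer patch for x'' = -x, simulated digitally:
# an inverter feeds -x back as the acceleration, a first
# integrator accumulates velocity, a second accumulates position.
dt = 0.001
steps = int(2 * np.pi / dt)   # one full period of the oscillation
x, v = 1.0, 0.0               # initial condition: x(0) = 1, x'(0) = 0
for _ in range(steps):
    a = -x                    # feedback: acceleration = -position
    v += a * dt               # first integrator: velocity
    x += v * dt               # second integrator: position

print(abs(x - 1.0) < 0.01)    # True: x returns to ~1 after one period
```

The solution x(t) = cos(t) falls out of the wiring; nobody "programs" a cosine function anywhere.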

## Wrong Assumptions?

Cognitive Science assumes computational debates are between two variations of digital computation^{3}Piccinini, G. https://philosophyofbrains.com/2022/11/25/cognitive-science-and-the-different-kinds-of-computation.aspx:

- Classical / Language of Thought (LOT)
- Nonclassical / Connectionist

Gualtiero Piccinini writes that the last 20 years have shown that assumption to be wrong, as there's at least a third major type: analog computation.^{3}Piccinini, G. https://philosophyofbrains.com/2022/11/25/cognitive-science-and-the-different-kinds-of-computation.aspx

Analog computation has seen a resurgence of interest, and the field of computer science itself has further expanded to include quantum computation and various other unconventional models of computation that are nondigital in various ways.

Piccinini also wrote:

> Be aware that the debate has shifted and literature that is older than 10 years is often obsolete, and even some recent literature is badly misinformed.

But it probably won’t be as easy as simply switching from digital to analog.

Why? Because *neural computation*, although “analog” in a sense, is not explainable with just analog computers (at least with the definition provided in the previous section). Nor is it explainable with just digital computers—even though some aspects of neural computation may be at some level digital.^{2}Piccinini, G. and Bahar, S. (2013), Neural Computation and the Computational Theory of Cognition. Cognitive Science, 37: 453-488. https://doi.org/10.1111/cogs.12012

## Analog Neural Networks

Recently, some researchers have come up with a way to build neural nets entirely with analog electronics.^{4}Kendall, J.D., Pantone, R.D., Manickavasagam, K., Bengio, Y., & Scellier, B. (2020). Training End-to-End Analog Neural Networks with Equilibrium Propagation. *ArXiv, abs/2006.01981*. Using simulations of the electronics, they claim that the analog nets perform about as well as equivalent-sized digital (software-based) neural nets. They also claim: "Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning."

> …training these neural networks on graphics processing units (GPUs) is time consuming and energy intensive…Programmable resistors can implement the synapses of a neural network by encoding the synaptic weights in their conductances. Such programmable resistors can be built into large crossbar arrays to represent the weight matrices of the layer-to-layer transformations of a deep neural network.
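The crossbar idea can be sketched numerically: weights stored as conductances, inputs applied as voltages, and Ohm's and Kirchhoff's laws doing the matrix-vector multiply for free (the values here are made up for illustration):

```python
import numpy as np

# Sketch of a crossbar array: synaptic weights are stored as
# conductances G (siemens), input activations applied as voltages V.
# Ohm's law gives each crosspoint's current (I = G * V), and
# Kirchhoff's current law sums the currents on each output line --
# so physics itself computes the matrix-vector product I = G @ V.
G = np.array([[1.0, 0.5],    # conductance matrix = weight matrix
              [0.2, 0.8],    # (3 outputs x 2 inputs)
              [0.0, 1.5]])
V = np.array([0.3, 0.6])     # input voltages

I = G @ V                    # currents read at the outputs
print(I)                     # approximately [0.6, 0.54, 0.9]
```

Every crosspoint "multiplies" simultaneously, which is why the array is massively parallel by construction rather than limited by a processor count.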

> This setup presents significant advantages over multi-processors such as GPUs: the number of operations that can be simultaneously executed in a GPU is limited by its number of processors; in contrast, a crossbar array is a massively-parallel computing device in which all resistors do parallel computations. Furthermore, since the computation in a crossbar array is in the analog domain, the power consumption is also several orders of magnitude lower than that of a GPU.

## Forward-Forward

Geoffrey Hinton (famous AI dude) has proposed a replacement for backpropagation that's possibly more like how biology works. It's called the Forward-Forward (FF) algorithm.^{5}Hinton, G. (2022). The Forward-Forward Algorithm: Some Preliminary Investigations. https://www.cs.toronto.edu/~hinton/FFA13.pdf FF has another benefit: it simplifies the components needed by those trying to build artificial neural nets with analog computers:

> An energy efficient way to multiply an activity vector by a weight matrix is to implement activities as voltages and weights as conductances…Unfortunately, it is difficult to implement the backpropagation procedure in an equally efficient way, so people have resorted to using A-to-D converters and digital computations for computing gradients…The use of two forward passes instead of a forward and a backward pass should make these A-to-D converters unnecessary.
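Here's a toy sketch of the two-forward-pass idea (a simplified reading of FF, not Hinton's exact recipe; the layer sizes, data, and hyperparameters are invented for illustration). One layer defines its "goodness" as the sum of squared activities, then nudges its weights to raise goodness on positive (real) data and lower it on negative (fake) data, using only local gradients and forward passes:

```python
import numpy as np

rng = np.random.default_rng(0)

# One Forward-Forward-style layer: 8 inputs -> 16 hidden units.
W = rng.normal(scale=0.1, size=(16, 8))

def goodness(x):
    h = np.maximum(0.0, W @ x)     # forward pass with ReLU
    return h, np.sum(h ** 2)       # "goodness" = sum of squared activities

def ff_update(x, positive, lr=0.01):
    global W
    h, g = goodness(x)
    # d(goodness)/dW = 2 * h * x^T; zero on inactive units since h = 0.
    # Purely local: no backward pass through other layers is needed.
    grad = 2.0 * np.outer(h, x)
    W += lr * grad if positive else -lr * grad

x_pos = rng.normal(size=8)         # stand-in "real" sample
x_neg = rng.normal(size=8)         # stand-in "negative" sample

_, before = goodness(x_pos)
for _ in range(50):
    ff_update(x_pos, positive=True)
    ff_update(x_neg, positive=False)
_, after = goodness(x_pos)
print(after > before)              # True: goodness rose on positive data
```

Because the gradient depends only on the layer's own input and activities, the update could in principle stay in the analog domain, which is the point of the quote above.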

Maybe this will help drive an increase in analog-computed AI.

## Ride the Wave

The old electric analog computers from last century did not have the advantage of metamaterials.

It has been proposed to make an analog version of a DSP (digital signal processor) enabled by photonic metamaterials.^{6}Momeni, A., Rouhi, K., & Fleury, R. (2021). Switchable and Simultaneous Spatiotemporal Analog Computing. https://arxiv.org/abs/2104.10801

This “metasurface processor”—an optical wave-based computer—could be used to do parallel spatiotemporal signal and image processing. And it would be able to operate at a higher frequency than a DSP and use less power.

## New Tech

In recent years, Sara Achour, a researcher at MIT and Stanford, has been working on compilers called Arco and Legno that convert dynamical systems problems into analog device configurations. This could help close a big gap^{7}Achour, S. https://people.csail.mit.edu/sachour/:

> We are already seeing a proliferation of specialized devices that efficiently solve problems in machine learning, signal processing, and biology. Delivering the potential of such devices to domain specialists is a challenge: because these devices are designed for efficiency, there is a significant gap between the device's programming interface and the programming model the end user can use productively.

A new analog computer has become available, called THAT (THe Analog Thing). Its makers claim it can be used to model many systems, such as market economies, disease spread, population dynamics, chemical reactions, and mechanical systems. And here's the great part: it's open source.
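For a taste of the kind of model such a machine gets patched to solve, here is a standard population-dynamics system, the Lotka-Volterra predator-prey equations, stepped numerically (the parameters and initial populations are illustrative, not taken from any THAT example):

```python
# Lotka-Volterra predator-prey model:
#   dx/dt =  a*x - b*x*y   (prey)
#   dy/dt = -c*y + d*x*y   (predators)
# An analog machine solves this continuously with integrators;
# here we step it with small Euler increments instead.
a, b, c, d = 1.0, 0.1, 1.5, 0.075
x, y = 10.0, 5.0           # initial populations
dt, steps = 0.001, 20000   # 20 time units of simulated ecology

for _ in range(steps):
    dx = (a * x - b * x * y) * dt
    dy = (-c * y + d * x * y) * dt
    x, y = x + dx, y + dy

print(x > 0 and y > 0)     # True: the populations oscillate, neither dies out
```

On the analog side, each term of the equations maps to a physical component (an integrator, a multiplier, a coefficient knob), and "running the program" means watching the voltages evolve.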

Anabrid, the German company selling pre-made THATs, is also developing an analog computer-on-a-chip.

A final foretelling from the 1990s:

> The future of analog computing is unlimited. As a visionary, I see it eventually displacing digital computing, especially, in the beginning, in partial differential equations and as a model in neurobiology. It will take some decades for this to be done. In the meantime, it is a very rich and challenging field of investigation, although (or maybe because) it is not in the current fashion.

Lee Rubel, 1995