Where Were You When the Modern AI Hype Train Derailed?
“Do you know what a shit barometer is? Measures the shit pressure in the air…listen Bubbs, you hear that? The sounds of the whispering winds of shit…”
—Trailer Park Boys
And yet, despite the Argo AI shutdown, so many self-driving car startups disappearing and/or getting absorbed into larger companies over the years (e.g. Optimus Ride, NuTonomy), and Uber giving up on self-driving, Alex Kendall’s company Wayve is still standing. He tweeted recently about “breakthroughs” AI has made “this year,” although they appear to span several years.
Alex continues the thread with his own company’s accomplishments, including referring to their neural-network-driven cars as “Embodied Intelligence.”
Is it really embodied intelligence? Is it that different from other autonomous vehicle tech? Or is Wayve just co-opting the term to sound cool?
The difference is supposedly that Wayve’s tech stack can generalize to new types of cars and to new cities without specific training. I guess we’ll see how generalizable it is as they keep developing, assuming the company doesn’t meet financial doom. What happens when you train in London and then try to drive through a Canadian winter?
That’s why Waymo’s claim about their AI program having driven millions of miles on public roads is so hollow. The program has to have that amount of data to come close to human levels of performance in realistic driving scenarios. And even then, it’s trivially fragile. It can’t drive accurately without insanely detailed maps beforehand. It can’t handle inclement weather. It’s sensitive to very small changes in the environment.
And then there’s the Leakage and the Reproducibility Crisis in ML-based Science, where “ML” means the specific field of Machine Learning that has kind of taken over AI.
And yet there’s been progress in autonomous cars, just nowhere near the promises of the past.
Roboticist, MIT professor (emeritus), and entrepreneur Rodney Brooks, who has a history of being skeptical of whatever non-robust AI happens to be popular, rode in self-driving Cruise vehicles in San Francisco earlier this year. He reported they actually have an MVP (Minimum Viable Product).
But don’t get over-excited yet as Brooks also said:
please don’t make the mistake of thinking that an MVP means that mass adoption is just around the corner. We have a ways to go yet, and mass adoption might not be in the form of one-for-one replacement of human driving that has driven this dream for the last decade or more.
Given unlimited resources, I think we could find solutions to most of the practical issues, and at a faster rate. But resources are limited, at least in terms of funding. The flattening of AI, where we put all our eggs in a few baskets, may be the biggest hindrance and might lead to a new AI Winter, reducing funding dramatically.
It hasn’t happened yet, at least for new startups (which may disappear tomorrow). AI is still hot in the venture capitalist world even though there’s an overall VC pullback in the market. But can they keep coming up with hot marketable ideas like the current fad of generative AI?
…which has been obvious, since getting funding anywhere for AGI or Strong AI or GOFAI is very hard and not in many organizations’ best interests.
Money dictates the direction of progress. “People didn’t shift to building industrial AI after they got tired of failing to build intelligent machines…Most never cared about the latter.”
In my opinion there wasn’t a big change in AI’s “true goal” (as if there were such a thing; after all, the distinction between Strong AI or AGI and Weak or Narrow AI has been around for a long time). There has simply been a change in terminology: everybody started using “AI” for applied AI and ML. And marketing latched on to and accelerated that terminology evolution.
It used to be said—I heard this a long time ago from Rodney Brooks—that whenever a part of AI works it simply becomes part of “computer science.”
Why are deep learning technologists so overconfident? “Hype is nothing new to machine learning, but this wave seems different. Billions of dollars in funding have been allocated based on this hype, and it has led to a massive amount of public confusion.”
“Despite significant and growing efforts, including corporate investment in ML projects and initiatives for the past few years, only a fraction of ML models reach production and deliver tangible results,” wrote Ed Fernandez in 2020, though he noted that the fraction that did reach production was profitable.
He also said that “ML is quickly going beyond the hype cycle peak and mainstream adoption in the Enterprise is expected to be only 2–3 years away.” It has now been almost 3 years—are we beyond the hype cycle now? And if so, is a large portion of AI work (and workers) about to head into an abyss?
According to Gartner (via this article) in 2021, AI was quickly headed to the “Peak of Inflated Expectations.” Well over a year later, perhaps AI is now just over that Peak and careening into the “Trough of Disillusionment.”
There have been at least two AI winters since AI’s inception in the 1950s (at least under the name “Artificial Intelligence”), in which public and research expectations were not met, resulting in periods of funding cuts, reduced R&D, and general disappointment.
From “AI winter is well on its way” (2018):
Predicting the A.I. winter is like predicting a stock market crash – impossible to tell precisely when it happens, but almost certain that it will at some point. Much like before a stock market crash, there are signs of the impending collapse, but the narrative is so strong that it is very easy to ignore them, even if they are in plain sight. In my opinion there are such signs of a huge decline in deep learning (and probably in AI in general as this term has been abused ‘ad nauseam’ by corporate propaganda) already visible.
I do not want to be an AI Winterist, but the overemphasis on and hype around a small set of super-narrow approaches over the past decade, in combination with the economy, are potentially leading to an upcoming (maybe already started) winter of sorts.
Don’t get me wrong, this is a pro-AI blog—I’m a supporter of software and hardware innovation, including what may be called “AI,” as well as Strong AI research. To hedge against a new Winter, I’d recommend more people get on board investing in and developing various forms of “robust AI,” which could overlap substantially with both basic research and applications.