I’ve posted some skepticism of the new AI models that are getting all the press—and all the money—in the past year.
I said in “When AI Phones It In” that a lot of the fear of jobs being taken away is vaporous.
And in “The AI Winter Shit-Winds Are Coming” I suggested we might be heading for an AI crash. That would be unfortunate, since we’ve already been weathering crashes in the markets for a couple of years.
Last month, in “Smells a little bit like AI winter?”, writer/scientist Gary Marcus asked whether the simultaneous “implosion” of AI efforts at Tesla, Google, and Microsoft could lead to an AI winter. He argued that these are not really separate issues, because they all depend on the same recipe of big data plus deep learning.
The famous roboticist, AI researcher, and writer Rodney Brooks recently downgraded his AI predictions:
I was accused of being a pessimist, but I viewed what I was saying as being a realist. In the last couple of years I have started to think that I too, reacted to all the hype, and was overly optimistic in some of my predictions. My current belief is that things will go, overall, even slower than I thought five years ago. That is not to say that there has not been great progress in all three fields, but it has not been as overwhelmingly inevitable as the tech zeitgeist thought on January 1st, 2018.
Independent journalist Kyle Becker recently asked, “Are Transportation Disasters on the Rise or Is It Just Our Imagination?”:
Helicopter crashes. Train derailments. Explosions. Chemical fires. The U.S. military shooting down ‘UFOs,’ as well as Chinese spy balloons. What in the hell is going on?
Some of this is just perception. Train derailments, for instance, have held fairly constant over the years. But derailments on the order of tens per year, some reaching Ohio levels of ecological disaster, is still a crazy number and not at all a good one.
Meanwhile, Americans are concerned about deteriorating infrastructure leading to more such disasters. The statistics thus far in 2023 show that they have a right to be concerned; the uptick is not merely a figment of our imagination due to heightened sensitivity or a lack of proportion following the supply chain disruption during the Covid pandemic slowdown.
(The Wildfire Newsletter)
Now, what would transportation-specific disasters have to do with AI?
Well, simply, the public might start distrusting a large swath of technology, especially given the overlap between AI and transportation over the past decade, and especially in the past couple of years, as Tesla and other big-money autonomous car and delivery projects have hit the mainstream channels.
Plus, people hear about AI used in military drone and autopilot fighter-jet projects, and even if those are just applied research, it’s gold for scaremongers.
Likewise with robotic dogs. Every time they appear in the news or in a recycled social media post, often alongside those irresponsible CGI fake-out videos that still circulate years later, the result is not just scaremongering but outright hate and threats of violence against the robots, even though humans are always in the control loop. Indeed, most industrial-strength mobile “robots” sold are teleoperated vehicles with optional and/or low levels of autonomy.
In Los Angeles, the activist groups People’s City Council and Stop LAPD Spying Coalition are literally calling the Boston Dynamics Spot platforms “torture devices,” both in public comments at a recent LA city council meeting and on Twitter. In this specific case they come across as Luddites who pretend to care about the safety of citizens without proposing any positive alternative beyond the negative one: banning all teleoperated robots in public services. They successfully delayed a trial adoption (literally a single robot) for 60 days. The same types of people completely prevented the New York Police from using the same Spot model, and then in LA claimed there was public outrage that stopped it, as if their own manufactured dissent, based on exaggerated and fictional claims, is itself something to cite in some kind of tautological circus.
I almost feel sorry for Boston Dynamics’ salespeople, who have to deal with this despite all the use cases for the Spot robot dog and the anti-weapon ethics of Boston Dynamics itself:
We will not sell to nor partner with those who wish to use our robots as weapons or autonomous targeting systems, and as clearly outlined in our terms & conditions, any use of our robots to harm or intimidate people or animals is strictly prohibited. In addition, any use of our robots must be in compliance with all applicable privacy and civil rights laws. We will take action to mitigate any misuse of our products.
(BD Public Safety Solutions)
Public sentiment is not my expertise; I’m just spitballing here. But from what I’ve seen, this fear is memetic. It’s contagious. And it could feed a general public fear of robots and AI, which could translate into less money put into those areas.
Show Me the Money
But for now the money keeps flowing in…at least for some companies.
‘They stopped investing in blockchain, they stopped investing in metaverse or whatever, and everybody’s seeking A.I. deals,’ explained Ari Lightman. ‘AI has been around forever; all of a sudden, it’s hot because of ChatGPT.’
(Tech with Daniel Howley)
Tech and finance seem to hunger for this. They need something, and this is it, as crypto and web3 float away amid the past couple of years of economic problems. And remember, US employment growth has been concentrated in sectors like hospitality, while tech and adjacent industries keep doing layoffs after layoffs. People’s 401(k)s and pension plans have been suffering. So a lot of folks, even if they don’t realize it, could benefit from continued financial hype in AI.
With the end of the Second Tech Boom, much of the excitement — and money — has gone out of the consumer internet space, especially at the VC level. There was a weird little interregnum in 2020 and 2021 when suddenly all the money was in crypto, and the AI folks — who had thought it was their time — were sort of hanging around watching all these weirdo interlopers show up throwing around stacks of cash, and wondering if they had gone into the wrong thing.
Then crypto crashed, the weird people who had been hanging around town mostly vanished, and ChatGPT and AI art came out and wowed the whole world. The natural order of things righted itself, and AI continued its serene, implacable march to supremacy.
(Noah Smith)
I hope the money keeps flowing, but I wish they’d diversify the field of AI itself. And AI as a concept has already infiltrated tech everywhere. A San Francisco neighborhood is now being called “Cerebral Valley”:
In this small area, you can find “hacker houses” where the new crop of techies lives, “third spaces” where they co-work and hang out, and cafes and restaurants and bars that they frequent.
(Noah Smith)
I wanted to add a quote from Cory Doctorow’s essay “The AI hype bubble is the new crypto hype bubble,” which he published at the same time as this post:
the most exciting part of the “internet in the ’90s” was that it had incredibly low barriers to entry and wasn’t dominated by large companies – indeed, it had them running scared.
The AI bubble, by contrast, is being inflated by massive incumbents, whose excitement boils down to “This will let the biggest companies get much, much bigger and the rest of you can go fuck yourselves.” Some revolution.
AI has all the hallmarks of a classic pump-and-dump, starting with terminology. AI isn’t “artificial” and it’s not “intelligent.” “Machine learning” doesn’t learn. On this week’s Trashfuture podcast, they made an excellent (and profane and hilarious) case that ChatGPT is best understood as a sophisticated form of autocomplete – not our new robot overlord.
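The “autocomplete” framing can be made concrete with a toy sketch of my own (not Doctorow’s or the podcast’s): a tiny bigram model that, like an LLM at its core, just predicts a plausible next token from what came before, appends it, and repeats. Real models use neural networks over subword tokens rather than word counts, but the generation loop is the same shape.

```python
import random
from collections import defaultdict

# Toy "autocomplete": count which word followed which in a tiny corpus,
# then generate text by repeatedly sampling a plausible next word.
corpus = "the dog runs and the dog barks and the cat sleeps".split()

followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def complete(prompt_word, n=5, seed=0):
    """Generate up to n words by sampling a likely next word each step."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(n):
        options = followers.get(out[-1])
        if not options:  # no known continuation: stop, like hitting end-of-text
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(complete("the"))
```

The point of the sketch: nothing in the loop “understands” anything; it only extends a sequence with statistically likely continuations, which is the sense in which ChatGPT is a (vastly more sophisticated) autocomplete.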