AI: The Good, the Bad and the Ugly

A Wide Range of Societal Effects from LLMs

The Good

Paul Pallaghy points out the unexpected benefits of LLMs (Large Language Models).

I can’t find any evidence for strong predictions of LLMs being this cool. And I don’t recollect any myself…So, prior to GPT-1, I don’t think anyone really remotely predicted the insane success that LLMs would achieve so relatively quickly.

There are many “bullshit jobs,” and even many non-BS jobs involve bullshit tasks. In ChatGPT: A Bullshit Tool For Bullshit Jobs, Alberto Romero points out that although ChatGPT is essentially a bullshit generator, that’s great for humans because they can automate their bullshit jobs and bullshit tasks:

Now I get it: ChatGPT allows them to escape what I’ve been avoiding my whole life. People are just trying to “recapture some sanity” with the tools at their disposal as I do when I write. Whereas for me, as a blogger-analyst-essayist, ChatGPT feels like an abomination, for them—for most of you—it couldn’t be more welcome.

The Bad

It would seem that all technologies are double-edged swords.

Do half of AI researchers believe there's a 10% chance AI will kill us all? No. The claim comes from a survey sent to a few thousand people who had authored papers at two specific ML conferences; 17% chose to respond, and only a portion of those were given this specific question. As Melanie Mitchell (author of the linked article) asks, is there a selection bias among the 162 authors who answered it? And even for those who did give the 10% figure, what does that number mean? Where does the 10%, or any probability value, come from? What is the time span for the question? It's not even an engineering-level WAG (Wild-Ass Guess), and it concerns a fictional scenario that has never actually happened in history.

But is ChatGPT getting worse? Apparently this claim has been going around based on a misinterpretation of a particular paper. The blog AI Snake Oil concludes:

In short, the new paper doesn’t show that GPT-4 capabilities have degraded. But it is a valuable reminder that the kind of fine tuning that LLMs regularly undergo can have unintended effects, including drastic behavior changes on some tasks. Finally, the pitfalls we uncovered are a reminder of how hard it is to quantitatively evaluate language models.
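
That evaluation pitfall is easy to reproduce in miniature. Here is a minimal hypothetical sketch (the responses and parsers below are invented for illustration, not taken from the paper): the same set of model answers can yield very different measured accuracy depending on how the grader extracts the answer.

```python
import re

# Hypothetical model responses to "Is 17077 a prime number?" (17077 is prime).
# These strings are invented for illustration, not quoted from any paper.
responses = [
    "Yes.",
    "Yes, 17077 is prime.",
    "Checking divisors step by step... therefore 17077 is prime.",
    "17077 is a prime number.",
]

def strict_parse(resp):
    # Count as correct only if the reply begins with a bare "Yes".
    return resp.strip().lower().startswith("yes")

def lenient_parse(resp):
    # Count as correct if the reply affirms primality anywhere.
    text = resp.lower()
    return text.startswith("yes") or bool(re.search(r"(?<!not )\bprime\b", text))

for name, parse in [("strict", strict_parse), ("lenient", lenient_parse)]:
    accuracy = sum(parse(r) for r in responses) / len(responses)
    print(f"{name} parser: {accuracy:.0%} measured accuracy")
```

All four answers are correct, yet the strict parser reports 50% accuracy while the lenient one reports 100%. A fine-tuned model that merely changes its answer format can look like a model that lost a capability.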

OK, so is there anything really bad, then? Maybe.

Millions of jobs might be displaced around the world. In No, AI Isn’t Going to Kill You, but It Will Cause Social Unrest — Part 1, Mark A. Herschberg writes:

Articles about AI overlords may be great clickbait, but they distract us from the more likely and more immediate impact on the labor market. While long term AI is likely to cause more access and prosperity in the long run, as technology often does, the level of short-term disruption is unprecedented. Governments and society can minimize this impact if politicians can work to take the necessary steps, if they are willing to act. OK, now you can despair.

As a supporter of UBI (Unconditional Basic Income), I found this Pasadena City Council meeting clip hilarious but also more serious than you might think: The Government Needs to Protect us from A.I. | Chad and JT.

I, for one, support Chad and JT’s proposal. Obviously you can’t just tell “the AI” to pay us $10k a month, but through systemic changes we could achieve essentially the same result.

The Ugly


We’re already seeing some pushback and a desire to filter out AI-generated text. For instance, Medium is going to start rewarding non-AI-generated articles: New Partner Program incentives focus on high-quality human writing.

But the growing amount of AI-generated text on the internet might be bad for future AIs as well. When models train on data that includes earlier models’ output, the effect is a kind of informational inbreeding, which can result in model collapse.

“Over time, mistakes in generated data compound and ultimately force models that learn from generated data to misperceive reality even further,” wrote one of the paper’s leading authors, Ilia Shumailov…“We were surprised to observe how quickly model collapse happens: Models can rapidly forget most of the original data from which they initially learned.”

VentureBeat

In a May 2023 preprint, researchers wrote:

What is different with the arrival of LLMs is the scale at which such poisoning can happen once it is automated. Preserving the ability of LLMs to model low-probability events is essential to the fairness of their predictions: such events are often relevant to marginalised groups. Low-probability events are also vital to understand complex systems…

Our evaluation suggests a “first mover advantage” when it comes to training models such as LLMs. In our work we demonstrate that training on samples from another generative model can induce a distribution shift, which over time causes Model Collapse. This in turn causes the model to mis-perceive the underlying learning task.
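
The mechanism is easy to see in a toy simulation (a minimal sketch under arbitrary illustrative choices, not the paper’s experiments): repeatedly fit an empirical distribution to samples drawn from the previous generation’s model. Rare events that happen to draw zero samples vanish permanently, which is exactly the loss of low-probability events the authors describe.

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy "vocabulary" of 50 events; a Dirichlet draw makes several of them
# rare, standing in for the low-probability events the preprint warns about.
vocab_size = 50
probs = rng.dirichlet(np.full(vocab_size, 0.3))

n_samples = 1_000  # "training set" size per generation

for generation in range(1, 21):
    # Train the next model: fit an empirical distribution to samples
    # drawn from the previous generation's model.
    counts = rng.multinomial(n_samples, probs)
    probs = counts / n_samples
    # Once a rare event draws zero samples, the refit model assigns it
    # probability 0, so it can never reappear in later generations.
    if generation % 5 == 0:
        surviving = np.count_nonzero(probs)
        print(f"generation {generation:2d}: {surviving}/{vocab_size} events still representable")
```

Each generation only loses events and never recovers them, so the fitted distribution drifts toward the head of the original one. Real LLM training is far more complex, but this is the distribution-shift intuition behind model collapse.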

Meanwhile, famous writer and AI researcher Douglas Hofstadter—whom I’ve mentioned previously in Recursion and the Human Mind, What are Symbols in AI?, Mechanisms of Meaning and Cognitive Abstraction Manifolds—seems to have changed his mind about deep learning and AI risk:

But the progress, the accelerating progress, has been so unexpected, so completely caught me off guard, not only myself but many, many people, that there is a certain kind of terror of an oncoming tsunami that is going to catch all humanity off guard.

It’s not clear whether that will mean the end of humanity in the sense of the systems we’ve created destroying us…but it’s certainly conceivable. If not, it also just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.

As a cockroach, I, for one, welcome our new ChatGPT AGI overlords. Obviously I’m not as worried as Hofstadter is. But it is creepy to hear someone much smarter than me make these statements.

