Do you remember Google+?
Hahahahahahaa. Things change fast. But not always as fast as some would have you believe…
Now, you’re going to see the word “Singularity” in a moment…but wait! Don’t run away! I’m not a Singularitarian.
Ok…back in 2014, I saw a skeptical post by a G+ user called “Singularity 2045” responding to a Machines-versus-Humans prediction in a Smithsonian interview-style article with James Barrat about a book he had written.
Barrat’s book is titled Our Final Invention: Artificial Intelligence and the End of the Human Era. Wow. Scary!
As the Singularity 2045 person (I assume it was a human!) said:
Don’t believe the hype. It is utter nonsense to think AI or robots would ever turn on humans. It is a good idea to explore in novels, films, or knee-jerk doomsday philosophizing because disaster themes sell well. Thankfully the fiction or speculation will never translate to reality because it is based upon a failure to recognize how technology erodes scarcity. Scarcity is the root of all conflict.
…
Smithsonian even includes a quote by the equally clueless Eliezer Yudkowsky:
In the longer term, as experts in my book argue, A.I. approaching human-level intelligence won’t be easily controlled; unfortunately, super-intelligence doesn’t imply benevolence. As A.I. theorist Eliezer Yudkowsky of MIRI [the Machine Intelligence Research Institute] puts it, “The A.I. does not love you, nor does it hate you, but you are made of atoms it can use for something else.” If ethics can’t be built into a machine, then we’ll be creating super-intelligent psychopaths, creatures without moral compasses, and we won’t be their masters for long.
Yikes! Some folks made it sound like we were about to be atomized and turned into paper clips at any moment!
I would like to add a couple of arguments in support of Singularity 2045’s conclusion:
- Despite “future shock” (before Ray Kurzweil and Vernor Vinge there was Alvin Toffler) from accelerating change in certain areas, most of these worries about machines-versus-humans battles read as fiction precisely because they assume a discrete transition point: a before the machines and an after. I don’t see that happening unless there’s a massive planetary invasion of intelligent alien robots. In real life, things unfold over a period of time, with gradual transitions and various arbitrary diversions and fads (arbitrary because of politics, for example)…despite any accelerating change.
- We have examples of humans living in partial cooperation and simultaneously partial conflict with other species. Insects outnumber us. Millions of cats and dogs live in human homes, bodegas, and city streets, coexisting with us for mutual benefit. Meanwhile, crows and parrots are highly intelligent animals that often live in symbiosis with humans…except when they become menaces.
If we’re going to map fiction to reality, Michael Crichton’s techno-thrillers are a bit closer to real technological disasters, which are local, specific incidents resulting from the right mixture of human error and coincidence (as happens in real life sometimes, for instance in nuclear reactor disasters). And sometimes those errors are far apart in time: somebody designs a control panel badly, and that poor design contributes to a bad decision by an operator ten years later during an emergency.
This may be of interest to readers: I talked about the Us-versus-Them dichotomy and the role of interfaces in human-robot technology in my paper “Would You Still Love Me If I Was A Robot?”
I doubt we will see anything as clear-cut as an us-versus-them new species. And if we maintain civilization, then new variations would not necessarily be segregated or given fewer rights, and vice-versa: they would not segregate us or strip away our human version 1.0 rights.