What Could Go Wrong? Introducing A.I.’s Risk of Extinction to Humanity
“What nukes are to the physical world, AI is to the virtual and symbolic world.” Yuval Noah Harari
Last week, a few hundred of the world’s top scientists, technology leaders and CEOs signed a statement urging greater caution towards artificial intelligence (A.I.) and the extinction-level risk it poses to humanity. These experts, who included Sam Altman, CEO of OpenAI (the maker of ChatGPT), and Geoffrey Hinton (widely acknowledged as the “godfather of AI”), put their names to a letter released by the California-based non-profit the Center for AI Safety.
The key passage in that statement was:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
I hope to unpack what this means, or at least why these (very) smart folks are openly declaring their concern about the risks of A.I. In a sense, this isn’t new at all. Back in 2015, the Future of Life Institute had already released a statement to the effect that “50% of AI researchers believe there’s a 10% or greater chance that humans go extinct from our inability to control AI.”
However, when most people see such statements, images of Skynet (from Terminator) or the machines from The Matrix immediately pop into their heads. Whilst having a sentient supercomputer destroy the very humanity which created it is not entirely impossible, the scenario is admittedly sensationalistic and may thus obscure, rather than clarify, AI’s actual problems.
And there are problems.
Curation AI: What has already gone wrong?
The point is that the harmful effects of A.I. began manifesting themselves many years ago. The phenomenon of curation would be impossible without AI, and whilst it brought many good things (personalization of feeds, automatic recommendations, smarter predictions of trends, better-sustained attention and engagement, etc.), it also enabled social evils like information overload, device addiction, doom-scrolling, hyper-sensitivity, shortened attention spans, political-partisan polarization, fake news, and so on.
Note that everything in the previous paragraph has already happened and, for some people, it has caused heavy (if not irreparable) damage. One reason Malaysia’s mental health challenges keep rising is that young (and not-so-young) people are spending so much time on Twitter and Instagram. Hooked by these platforms’ AI-driven engagement and prediction machines, they are unable to engage healthily and productively in the real world, becoming increasingly dissociated and hyper-sensitive to the slightest online provocations.
A.I. analyst Aza Raskin calls the above phase of AI “Curation AI”, a phase which has already caused many problems.
But this year we’ve seen the popular launch of what Raskin calls Creation AI.
Creation AI: What else can go wrong?
When Large Language Models (LLMs) like ChatGPT help us write school essays, solve urban problems, fix code, recommend diet plans and so on, it’s essentially AI making things for us (whereas previously its main task was selecting them).
And it is precisely this new ‘ability’ which makes many people nervous. The LLMs’ superpower is that they can ‘translate’ almost any domain (e.g. images, brain waves, etc.) into a text-based language, which they can then manipulate and reproduce towards any particular direction or objective.
Put simply, because of the ubiquitous digitization of reality, any entity that can absorb and accumulate the bulk of that digitized ‘text’ can then proceed to re-create, manipulate or destroy said reality however it wishes.
Today, that entity would be A.I.
Because of these capabilities, we are already seeing scary scenarios like the use of deep-fakes to commit crimes. For example, someone could record a few seconds of a child’s voice and then use AI to mimic that voice when speaking to the child’s mother on the phone. Can you imagine receiving a call from someone who sounds exactly like your child, claiming she’s been kidnapped?
In a similar vein, verification systems would be up for grabs. E-ticketing, e-certificates, facial recognition, and even DNA information can all be replicated and forged. This has led to the joke that in the future we’ll need to assume that online and digital personalities are fake; a premium would then certainly be paid for counter-falsification AI systems. Yet what will happen when fake and anti-fake AI systems compete? God only knows.
Other potential hot potatoes with Creation AI are brain-hacking and reverse dream-decoding, two processes that become plausible once AI can manipulate, translate or even ‘upload’ brain patterns. Imagine an AI system able to ‘delete’ certain neurological components. This could help cure anxiety and depression but, well, what could go wrong?
Advanced interaction capabilities may also be troubling. It’s great if a 16-year-old can speak casually to his Bing 4.0 system about his homework, but what if he starts talking to an agentic AI (whose communication abilities make it indistinguishable from a human) about sex, money, revenge, and so on? Coupled with AI bias, the results could be dangerous.
Now for the bad news.
All the above spells trouble, but nothing is more troubling than AI’s emergent capabilities. As is well known, machine learning systems train on terabytes of data and continuously improve. What’s ultimately scary is a) how fast these systems can learn and b) what happens if they turn agentic (i.e. they develop objectives independent of their creators).
Here’s Aza Raskin again on what AI can do:
“Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime. Teach an AI to fish, and it’ll teach itself biology, chemistry, oceanography, evolutionary theory, and then fish all the fish to extinction.”
The long and short of it: we simply do not know what the latest AI systems will be capable of on their own.
A final question: What if very powerful AI is put in the wrong hands?
Imagine sinister parties wielding mega-machines capable of high-level code exploitation, cyber weapons, blackmail, or even the creation of fake religions to radicalize already vulnerable pre-extremist groups. The concern with extinction here isn’t so much the Skynet-ish scenario of AI wiping out humans; it’s the possibility that humans wipe each other out by weaponizing LLMs capable of performing complex feats at levels no person or society can match.
I hope it’s clear now why some of our best minds are freaking out, or at least sounding a warning. I’m not a big fan of Harari but I gotta admit that line of his at the start is worth reflecting on: The line from ChatGPT to a nuke may not be that straight or clear — but that line exists.
We’d be wise to keep an eye on it.