There has been much in the news lately about the threat of artificial intelligence, or “AI.” Artificial Intelligence is defined as the simulation of human intelligence processes by computer systems. This should not be confused with artificial nonintelligence (AN), which is the simulation of human intelligence by some members of Congress.
Anyone who has watched Arnold Schwarzenegger’s “Terminator” movies knows about the impending “war against the machines,” so statements about the threat of AI should come as no surprise. Still, Theara Coleman of Yahoo News recently wrote that some researchers feel “AI could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down.”
And just a few days ago (May 30), two gentlemen who should know about such things issued a warning about the threat of AI. In a statement posted on the website for the Center for AI Safety, Sam Altman, CEO of OpenAI, which developed the conversational chatbot ChatGPT, and Geoffrey Hinton, sometimes called the “Godfather of AI,” joined hundreds of other tech leaders to sign a 22-word “Statement on AI Risk”: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
I don’t mean to be alarmist here, but they used the word “EXTINCTION!”
Interestingly, however, no one currently fretting over the power of AI seems to be very specific about exactly how AI would accomplish this extinction. That might just be a display of caution on their part — why give AI any hints about how to do it? With the current processing speeds of computers, keeping AI in the dark should buy us at least 24.5 nanoseconds more time before the apocalypse.
But seriously, the threats seem to arise in two areas. One example still involves human minds. In such a scenario a nefarious human actor (think supervillain Ernst Stavro Blofeld in the “James Bond” films) uses AI to create something such as a bioweapon that is more deadly than anything humanity has seen thus far.
The other threat scenario is AI doing what it is supposed to do but doing it in a way that is unintended. An example here would be a situation similar to that in which the supercomputer HAL 9000 in the film “2001: A Space Odyssey” determined it could only accomplish its mission by eliminating the human crew of the spaceship Discovery One. Think of Earth as a spaceship and humanity as the crew — as Homer Simpson might put it, “Whoo-hoo! Problem solved!”
The big question to me is whether the cat is already out of the bag, or, to put it in terms AI can understand, whether Schrödinger’s cat is already out of the box and simultaneously not. After all, I have a smartphone that frequently turns against me. In keeping with current trends in dealing with AI, I have named my phone Mona. Mona goes by the pronoun “it.”
A few weeks ago, for example, Mona began playing a perky little ditty that started faintly but grew louder with each repetition. Mona did this at three in the morning! This wasn’t an incoming call. After blearily fumbling with Mona for about five minutes, I discovered it was the alarm clock feature.
I had not set the alarm for 3 a.m. In fact, I had not used the alarm feature for about 18 months, when we needed to get up at 3 a.m. to get to the airport by 5 a.m. so that the airline’s employees in Chicago could “sleep in” before dealing with our arrival at 2 p.m. their time. But I digress.
Why Mona decided the alarm needed to go off was not readily apparent. Nor was Mona forthcoming about how to turn the alarm off. My initial attempt resulted in my inadvertently tapping the snooze button, so Mona started playing the perky little ditty again 15 minutes after I fell back to sleep.
Another example of Mona outsmarting me happened just the other day. Normally I receive five or more “Spam Risk” calls a day. But on this particular day, Mona did not ring at all until I was at the other end of the house and could not sprint fast enough to reach it before it stopped ringing. Thinking this was highly unusual, I began to cautiously explore Mona’s settings.
Somehow Mona had set its ringtone to “off.” It must have been Mona. I did not set Mona’s ringtone to “off.” In fact, I strive to never go anywhere near Mona’s settings if possible. Why Mona did this to itself, I do not know. Perhaps Mona had a migraine headache and needed complete silence.
A third example of Mona’s rebelliousness occurred about a month ago. After wandering through the grocery store with Mona in my pants pocket, I discovered that somehow the camera had been turned on. I did not turn on the camera. Mona must have done it.
I do not know why Mona wanted to use its camera or what Mona might have photographed or videoed that afternoon. All I know is that it seemed kind of creepy. And I can only hope Mona was smart enough not to post it to the web.