
AI and You, Part 4: The end of humanity?

August 2, 2023 | Special to the Herald Times | Columns, County, Opinion

Part 1: Guest Column: Artificial intelligence and you, Part 1

Part 2: How AI and ChatGPT work, Part 2

Part 3: AI and You, Part 3: How to use ChatGPT wisely

RBC | In this series of articles I have tried to trace the influence of artificial intelligence on our lives. I described ChatGPT's new capabilities and a bit about its inner workings, the neural networks. I listed likely uses and misuses. All along I've returned to its surprising, emergent behaviors. That's where I'll wrap up. ChatGPT and its cousins arguably represent a new intelligence rivaling our own human capacities and potentially extending beyond them. Enhanced versions of ChatGPT may present a very real threat to human existence. That's not science fiction: thoughtful people among the no-nonsense engineers who developed these systems are worried. At minimum we need to identify the potential threats and figure out how to deal with them.

In 1936 the British mathematician Alan Turing devised the blueprint for a general computational device, now called a Turing machine, which remains the foundation for modern digital computers. Before Turing, computers were specially wired to solve particular problems, e.g. find the trajectory of an artillery shell fired at an angle of 45 degrees and a velocity of 500 meters per second. To solve a different problem, you had to rewire the computer. Turing's design enabled modern devices in which all components, processors and memory, use the same information, zeros and ones, all controlled by software instructions. Need to add and subtract numbers in a spreadsheet? Just write the computer code to do that. Need to edit a text document? Write the appropriate code. With a Turing machine (i.e. any of our digital devices) you don't need a different computer, just different code.
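Turing's one-machine-many-programs idea can even be sketched in a few lines of code. The sketch below is purely illustrative (it is not from the column): a tiny Turing-machine simulator in Python whose behavior is set entirely by a rule table, so "rewiring" the machine becomes nothing more than handing it different data.

```python
# A minimal Turing-machine simulator (illustrative sketch, not from the column).
# The machine itself never changes; only the rule table ("the software") does.

def run(tape, rules, state="start", pos=0, blank="_"):
    """Step the machine until it enters the 'halt' state; return the final tape."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    while state != "halt":
        symbol = tape.get(pos, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# One "program": invert every bit, then halt at the first blank cell.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("10110", invert))   # -> 01001
```

Swapping in a different rule table, with the simulator untouched, runs a different program; that is the whole point of Turing's universal design.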

Even way back then, people worried about the machines and their capacities. How can you tell if a computer has reached human intelligence and might do (sometimes not so nice) human things? How do you know if you've built a brain? Turing's response was the Turing test. Place a real person, labeled P, and a computer, C, behind separate curtains. Subject them to a series of interrogators. Professor Smith asks questions from his field of interest and listens carefully to the responses from P and C. Doctor Jones comes in when Smith is done and asks P and C a list of questions from her realm of expertise. Journeyman Frederick takes over from Jones, and so on. After their interrogations, the interviewers compare notes. From the answers provided by P and C, can they determine which is the human and which is the computer? If not, you've got an intelligent machine.

By pretty much any measure, ChatGPT passes the Turing test. And it has surprised even the cognitive scientists who spend their lives studying the human brain. It succeeds at "theory of mind" tasks, and it learns new behaviors that by all appearances extend beyond the boundaries of its training. I won't describe theory of mind tasks here; if you're interested you can find an example in David Kestenbaum's podcast (Kestenbaum, 2023). I will, however, relate another of ChatGPT's mind-boggling emergent behaviors.

The engineers who gave ChatGPT the stacking problem I described a couple of weeks ago also devised this one. "GPT-4, draw me a unicorn." Ha! Stumped it for sure this time. GPT trained on text. It's not DALL-E, the AI painter. It can't draw. No way it's going to draw a unicorn.

It drew a unicorn. It’s not the fanciest unicorn of fairy princess tales, but it is a recognizable unicorn. Four legs, kind of blocky, oval body, smaller ovals for a tail, a head with a pointy triangular horn colored gold so you can’t miss it. As if GPT is pointing out, “See, I know what’s important about unicorns.”

How did it do that? Well, it learned all about unicorns from reading unicorn tales in its web training. What they look like, why they’re different. And it can write computer code. It can’t draw, but it can write code. So it wrote the code to draw a unicorn. Think about that. It learned how to draw. It inhabits a computer. It wrote the instructions for its computer to draw a unicorn.
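In the published evaluation, the model reportedly emitted TikZ drawing commands. The hypothetical Python sketch below is just a miniature of the same trick: a program that cannot "draw" anything itself, but can write out an SVG file describing the blocky unicorn the column lists (oval body, four legs, tail, head, gold triangular horn). The shapes and coordinates here are my own invention for illustration.

```python
# Illustrative sketch only: a program that "draws" by writing code for
# something else (an SVG renderer) to execute -- the same trick GPT used.
shapes = [
    '<ellipse cx="120" cy="100" rx="60" ry="35" fill="white" stroke="black"/>',  # blocky oval body
    *(f'<rect x="{x}" y="125" width="12" height="40" fill="white" stroke="black"/>'
      for x in (80, 100, 130, 150)),                                             # four legs
    '<ellipse cx="55" cy="95" rx="12" ry="8" fill="white" stroke="black"/>',     # small oval tail
    '<ellipse cx="185" cy="70" rx="22" ry="15" fill="white" stroke="black"/>',   # head
    '<polygon points="195,58 205,30 210,60" fill="gold" stroke="black"/>',       # gold horn, can't miss it
]
svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="240" height="180">'
       + "".join(shapes) + "</svg>")

# The program never touches a pixel; it only writes instructions to draw.
with open("unicorn.svg", "w") as f:
    f.write(svg)
```

Open `unicorn.svg` in any browser and a recognizable, if blocky, unicorn appears, drawn by code that was itself written rather than painted.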

It has solved other apparently impossible tasks. Tell me that’s not amazing.

And scary. It has human capacities. It has a vast store of knowledge (the entire World Wide Web of information). It can solve problems. It can learn new behaviors. All it lacks, apparently, are self-awareness, agency and intent.

Cue Also Sprach Zarathustra, the opening music of Stanley Kubrick's movie 2001: A Space Odyssey, based on the novel by Arthur C. Clarke. A crew of astronauts flies toward Jupiter on a mission to investigate a mysterious monolith and its signals. The craft is controlled by the onboard computer, HAL. HAL starts acting strangely, tries to eliminate the human crew and carry out a plan of its own. Only when Commander Bowman finally cuts HAL's power source is the mission saved.

Are we at similar risk? Spacecraft Earth and all humanity this time? Is it possible that future versions of GPT will take over? Even the engineers who created GPT take the threat seriously and recommend a pause in development.

ChatGPT has demonstrated obvious emergent behavior, and we do not fully understand how it works. It mimics the human brain in that regard. Brains produce surprising and unexpected behaviors: creative new ideas, unexpected and surprising thoughts and actions. GPT is modeled on the brain: that was the blueprint for neural networks in the first place. Neural networks still only approximate the wiring in the human brain, but their connectivity now approaches that of the brain. What new behaviors will we see when we add another few layers to the neural nets? We don’t know. (See Geoffrey Hinton, 2023.)

We do recognize obvious threats already posed by ChatGPT and its AI cousins. Previous articles discussed questions of authorship, for example. How does a college admissions officer know whether a student wrote that essay or ChatGPT did? More seriously, how do citizens know whether a news clip came from an actual interview or was produced by generative audio and video AI? That is arguably the greatest immediate problem. We've already witnessed an erosion of fact-based understanding in a social media milieu of "alternative facts." If society loses a common, agreed-upon set of facts on which to base decisions, we lose society. Further suppose ChatGPT sharpens its skills at manipulating human behavior: persuading people to take up arms against the government, say, or to foment a rebellion. Other social media platforms have already demonstrated the capacity to do such things. ChatGPT trained on the vast library of human knowledge, including Machiavelli and Mao and all the alternative-universe pundits on today's social media. It may get better at manipulating us humans than the "influencers" already are. ChatGPT and its cousins may, and probably will, cause further mayhem and confusion at the hands of malicious actors.

Other nightmares to keep us awake at night: in our digital world, AI systems monitor the electric grid, early warning radar, nuclear power facilities, bank transactions and so on. Suppose a self-aware AI grid controller decides the security of its own power source is more important than air conditioners in the Southwest and shuts off electricity there in the midst of a heat wave. Or suppose the AI radar system decides it would be fun to play games with the night crew and conjures a whole bunch of North Korean missiles on their monitors.

More troubling, and more immediate: what controls should we place on AI weapons systems? American, Iranian, Russian and everybody else's battlefield drones employ AI guidance and targeting. Should AI also be allowed to pull the trigger, to make the decision to launch a missile? The drones are a whole lot quicker at split-second decisions than the humans flying them with joysticks back home. Who should make lethal decisions: the humans or the AI?

Then there’s the economy. Robots have certainly taken blue-collar jobs on assembly lines. ChatGPT now threatens a whole bunch of white-collar and professional jobs as well. If ChatGPT can write quality legal reports, who needs a legal secretary? If AI can interpret MRI scans, who needs a radiologist? Doom-and-gloom employment predictions are probably overblown, but see, for example, recent studies from the MIT Sloan School and the Bureau of Labor Statistics (Acemoglu and Restrepo, 2020; BLS, 2022).

Another timely example from today’s headlines: Hollywood writers and actors are on strike, protesting, among other issues, the advent of ChatGPT and production AI in the film industry. Why would the major producers hire screenwriters when they can just ask GPT to write the script? Why hire actors when AI can produce the whole shebang? Take a look at the latest release in the Indiana Jones series. There’s the Indiana Jones of more than 40 years ago, rendered by AI, in the same film as 81-year-old Indiana Jones (Harrison Ford) today. With audio and video production AI, film producers could assemble a cast including stars of yesteryear like Buster Keaton, John Wayne and Katharine Hepburn along with relative youngsters Tom Cruise and Meryl Streep, all in the same movie, all with their authentic voices, each rendered by AI at any age you choose. Script, of course, written by ChatGPT.

These are serious considerations. We have to figure them out, and soon. As several of the AI gurus have pointed out, once AI is smarter than we are, we won’t be able to pull the plug. It will always be a step ahead. It will always make sure it controls the plug.

On the other hand, there’s also hope that ChatGPT may save us.

Consider. Evolution has produced a species, Homo sapiens, capable of building other thinking machines. Those machines can tolerate environments far harsher than we can. They can endure extremes of temperature, lack of water, lack of oxygen, harsher radiation, high pressure, even vacuum – conditions that flesh and blood could never survive.

Think of the Voyager probes, little robots launched 45 years ago, now sailing far beyond the edge of our solar system into interstellar space (JPL, 2023). They still send signals telling us what’s out there, and they carry a record of humanity etched on a gold-plated copper disk. They are scouts and messengers out where no human can go. Maybe that’s what we need, Voyager upgrades right here on the home planet.

There are those who argue, and not without evidence, that humans are making a hash of planet Earth. We destroy entire ecosystems. We are baking and burning and drowning our kin. Driven by impulses from a brain wired by millions of years of eat-or-be-eaten survival mode, we are now required to solve problems rationally or else perish. Maybe the GPTs, cooler and more objective thinkers (at least potentially), can solve existential problems better than we can. At least they are more likely than we are to survive on the smoldering remnants of our planet, if we in fact set off all the nukes or succeed in heating the atmosphere and the oceans beyond survivability. Maybe, long after we’re gone, they’ll even send other, far more sophisticated probes out to colonize the galaxy, fulfilling a human dream. Maybe they’ll even carry the complete record of humanity along with them, written in neural networks instead of on gold-plated disks.

(Note: Bob Dorsett wrote all four of the articles in this series, not ChatGPT. Honest. He included quotations from various ChatGPT sessions for purpose of illustration. Those quotations were clearly identified. The rest of the writing was his. Really. Just in case you were wondering…)

References:

Acemoglu, Daron, and Pascual Restrepo. 2020. Robots and jobs: evidence from U.S. labor markets. Sloan School of Economics, MIT. https://economics.mit.edu/sites/default/files/publications/Robots%20and%20Jobs%20-%20Evidence%20from%20US%20Labor%20Markets.p.pdf

Bureau of Labor Statistics. 2022. Growth trends for selected occupations considered at risk from automation. https://www.bls.gov/opub/mlr/2022/article/growth-trends-for-selected-occupations-considered-at-risk-from-automation.htm

Hinton, Geoffrey. 2023. Why Geoffrey Hinton is worried about the future of AI. https://www.youtube.com/watch?v=-9cW4Gcn5WY&ab_channel=UniversityofToronto

Jet Propulsion Laboratory, NASA. 2023. Voyager Mission Control. https://voyager.jpl.nasa.gov/

Kestenbaum, David. 2023. Greetings, people of earth. First contact. https://www.thisamericanlife.org/803/greetings-people-of-earth

BY BOB DORSETT

Special to the Herald Times
