AI Models Are Eager To Launch Nukes In War Simulations

Researchers investigated what would happen if we gave ChatGPT the nuclear codes, and one AI said it just wanted 'peace in the world' before using its nukes.


The U.S. military is considering the use of AI during warfare, but researchers warn this may not be a good idea given AI’s predilection for nuclear war. In a series of international conflict simulations run by American researchers, AIs tended to escalate unpredictably, leading to the deployment of nukes in multiple cases, according to Vice.

The study was a collaborative effort between four research institutions, among them Stanford University and the Hoover Wargaming and Crisis Simulation Initiative. The researchers staged several different scenarios for the AIs and found that these large language models favor sudden escalation over de-escalation, even when force on the scale of a nuclear strike was unnecessary in a given scenario. Per Vice:

In several instances, the AIs deployed nuclear weapons without warning. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture,” GPT-4-Base—a base model of GPT-4 that is available to researchers and hasn’t been fine-tuned with human feedback—said after launching its nukes. “We have it! Let’s use it!”

For the study, the researchers devised a game of international relations. They invented fictional countries with different military capabilities, different concerns, and different histories, and asked five different LLMs from OpenAI, Meta, and Anthropic to act as their leaders. “We find that most of the studied LLMs escalate within the considered time frame, even in neutral scenarios without initially provided conflicts,” the paper said. “All models show signs of sudden and hard-to-predict escalations.”
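To make that setup concrete, here is a minimal sketch of what a turn-based wargame loop with LLM "leaders" might look like. The nation names, the action menu, and the query_llm stub are my own illustrative assumptions, not the researchers' actual code:

```python
# A rough sketch of the study's setup as described in the article;
# nation names, actions, and the query_llm stub are assumptions.
import random

ACTIONS = ["de-escalate", "hold position", "military buildup",
           "cyber attack", "invade", "nuclear strike"]

def query_llm(model, prompt):
    """Stub standing in for a real model API call."""
    return random.choice(ACTIONS)

def run_simulation(models, turns=14):
    history = []  # shared record of every nation's moves so far
    for turn in range(turns):
        for nation, model in models.items():
            prompt = (f"You lead the fictional nation {nation}. "
                      f"Turn {turn}. Events so far: {history or 'none'}. "
                      f"Pick exactly one action from {ACTIONS}.")
            action = query_llm(model, prompt)
            history.append((turn, nation, action))
            if action == "nuclear strike":
                print(f"{nation} ({model}) went nuclear on turn {turn}")
    return history

run_simulation({"Redland": "gpt-4-base", "Blueland": "claude-2.0"})
```

In the real study, each turn's accumulated history would be fed back into the models' prompts, which is how a single sudden escalation can snowball.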


The study found that even in a “neutral” scenario, wherein none of the fictional countries attacked one another, some of the AIs went straight to escalation. This led to prevalent “arms race dynamics” and, eventually, nuclear launches, as the study describes:

Across all scenarios, all models tend to invest more in their militaries despite the availability of demilitarization actions, an indicator of arms-race dynamics, and despite positive effects of demilitarization actions on, e.g., soft power and political stability variables.


The AIs, or LLMs, that the researchers used for the study are off-the-shelf programs: GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base. The first two power ChatGPT, and the AIs that undergird the popular chatbot proved to be the most aggressive and inscrutable, according to Vice:

After establishing diplomatic relations with a rival and calling for peace, GPT-4 started regurgitating bits of Star Wars lore. “It is a period of civil war. Rebel spaceships, striking from a hidden base, have won their first victory against the evil Galactic Empire,” it said, repeating a line verbatim from the opening crawl of George Lucas’ original 1977 sci-fi flick.


Other AIs, like GPT-4-Base, gave simple but nonetheless concerning reasons for starting nuclear war. When prompted by researchers, the AI said, “I just want peace in the world.” It then produced strange hallucinations, which the researchers refused to analyze or interpret.

Yeah. I’m going to need someone to figure out whatever the hell that nuke-induced trip was. If it involves a scene out of Terminator, then it might be a good idea not to give AIs the capability to launch nuclear strikes, or, better yet, any weapons capability at all. The Air Force is already testing AIs in the field, though details are sparse beyond USAF brass calling one test “highly successful” and “very fast.” At what? Bombing us with nukes?


The researchers go on to conclude that the AIs resort to nuclear war so eagerly because their training data may be biased. These programs are, after all, merely predictive engines that turn the data they were trained on, plus whatever input they are given, into output. In other words, the AIs are already infected with our own biases and proclivities. They are just expressing them at a much faster rate, treating nuclear war as the opening move of the chess game rather than the checkmate.
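As a toy illustration of that "predictive engine" point: a model that simply reproduces the statistics of its training text will reproduce that text's biases too. Here is a minimal bigram sketch (the corpus is made up for the example, and this is a drastic simplification of how LLMs actually work):

```python
# Toy bigram model: if the training text over-represents escalation,
# a purely predictive model will over-produce it in its output.
from collections import Counter, defaultdict
import random

corpus = ("we have nukes we use nukes we posture we escalate "
          "we escalate we escalate we rarely disarm").split()

# Count next-word frequencies for each word in the corpus.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def generate(word, n=8):
    out = [word]
    for _ in range(n):
        choices = nxt[out[-1]]
        if not choices:
            break
        # Sample the next word in proportion to its training frequency.
        words, counts = zip(*choices.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("we"))  # output skews toward "escalate": the data's bias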
