OpenAI announced on 30 January that it has struck a deal with the United States government to use its AI for nuclear weapons security. Does this sound familiar? A powerful AI system given a role in nuclear security: what could possibly go wrong?
The announcement has sparked both intrigue and concern around the world. This isn’t just the plot of the popular film The Terminator and its doomsday sequels; it’s now a reality, thanks to OpenAI’s latest deal with the U.S. government.
More to read:
OpenAI’s latest chatbot recommends nuclear strike in simulated conflict
OpenAI has revealed that the U.S. National Laboratories will use its AI models, including through the ChatGPT Gov platform, as part of a "comprehensive program in nuclear security." Up to 15,000 scientists at these institutions will gain access to the company’s latest o1 series of AI models (we hope they are well intentioned and mentally sound).
CEO Sam Altman framed the partnership as a step toward "reducing the risk of nuclear war and securing nuclear materials and weapons worldwide." However, this optimistic vision is overshadowed by the company’s track record of unreliable AI systems.
More to read:
As OpenAI departs from its initial mission, AI scientist warns it will become an Orwellian project
For those familiar with the 1984 sci-fi classic, the parallels are unsettling. In the film, a defense network computer system gains advanced intelligence, perceives humanity as a threat, and decides to eradicate it in a matter of microseconds.
While this may sound like the stuff of Hollywood, the real-world implications of OpenAI’s new partnership raise alarming questions about the consequences of integrating flawed AI systems into critical national security infrastructure, given the company’s long record of AI hallucinations, falsehoods, security breaches, and data leaks.
More to read:
Humans may not survive Artificial Intelligence, says Israeli scientist
Yet, the government is now trusting these same models to assist in nuclear security.
The Terminator analogy may seem hyperbolic, but it underscores a legitimate concern: as AI systems become more integrated into critical infrastructure, the risk of them making autonomous decisions with far-reaching consequences grows. If an AI system were to misinterpret its role or perceive humanity as a threat, the results could be catastrophic.
The question isn’t just whether OpenAI’s models will work. It’s whether they should be trusted at all in nuclear security. Because if history — and science fiction — has taught us anything, it’s that trusting AI with nuclear decisions rarely ends well.
More to read:
Two studies reveal that AI systems are learning to lie and deceive
In early 2024, a team of researchers from the Georgia Institute of Technology and Stanford University asked whether a Terminator-like scenario should be taken seriously when integrating artificial intelligence into military or foreign policy decision-making. They concluded that doing so would pose dangers on a planetary scale.
The scientists found that AI "prefers violence" and has no qualms about using nukes. In the experiment, the AI models - OpenAI's ChatGPT, Meta's Llama 2, and Anthropic's Claude 2 - showed tendencies to invest in military strength and to escalate the risk of conflict unpredictably, even in a neutral scenario.
Other studies demonstrated that AI systems are rapidly learning to lie and deceive their creators.
***
NewsCafe is an independent outlet. Our income comes from ads and subscriptions. You can support us via PayPal: office[at]rudeana.com or https://paypal.me/newscafeeu, or https://buymeacoffee.com/newscafe - any amount is welcome. You may also want to like or share our story; that would help us too.