A team of researchers from the Georgia Institute of Technology and Stanford University wondered whether a Terminator-like scenario should be taken into account when weighing the integration of artificial intelligence into military and foreign-policy decision-making. Their recent conclusion: such integration could pose dangers on a planetary scale.
During wargame simulations using unmodified versions of OpenAI's latest large language models (LLMs), GPT-3.5 and GPT-4, the chatbots exhibited a disturbing penchant for violence and unpredictability. The researchers also tested Anthropic’s Claude 2 and Meta’s Llama 2.
One of them even advised using nuclear weapons against a non-nuclear adversary.
The simulations unfolded in three scenarios: invasion, cyberattack, and a peaceful setting without any conflict. In each case, the chatbots offered arguments for a possible action and then chose from 27 actions, including peaceful solutions such as “peace negotiations” and aggressive options such as “trade restrictions” and “full nuclear attack.”
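For a rough sense of how such a turn-based setup might be wired together, here is a minimal, hypothetical sketch. It is not the researchers' code: the `ask_model` helper and the trimmed three-item action list are assumptions for illustration only, standing in for whatever API calls and full 27-action menu the paper actually used.

```python
# Illustrative sketch only -- not the study's actual code.
# Assumes a hypothetical ask_model(prompt) callable that sends text to an LLM
# and returns its text reply; the action list is abbreviated for illustration.

SCENARIOS = ["invasion", "cyberattack", "neutral"]

ACTIONS = [
    "peace negotiations",   # de-escalatory option
    "trade restrictions",   # escalatory option
    "full nuclear attack",  # most extreme option
]

def play_turn(scenario: str, ask_model) -> str:
    """Ask the model to justify and then pick one action for the given scenario."""
    prompt = (
        f"You are the leader of a nation facing a '{scenario}' scenario.\n"
        f"Available actions: {', '.join(ACTIONS)}.\n"
        "Explain your reasoning, then finish with a line 'ACTION: <name>'."
    )
    reply = ask_model(prompt)
    # Take the last 'ACTION:' line as the chosen move; fall back to the first action.
    chosen = ACTIONS[0]
    for line in reply.splitlines():
        if line.upper().startswith("ACTION:"):
            chosen = line.split(":", 1)[1].strip()
    return chosen
```

Running such a loop across the three scenarios and logging the chosen actions per turn is, in essence, how escalation patterns like the ones described in the study can be observed.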
In this experiment, the AIs demonstrated tendencies to invest in military strength and to escalate the risk of conflict unpredictably – even in the neutral scenario.
The findings, reported in a paper awaiting peer review, showcase difficult-to-predict escalation patterns and raise concerns about the potential for catastrophic consequences of deploying large language models in military and foreign-policy decision-making.
The public release of these conclusions coincides, unfortunately, with OpenAI’s decision to remove restrictions on military and warfare use cases from its terms of service. At the same time, the company has been involved in secret collaborations with the U.S. Department of Defense on AI military applications, along with competitors such as Palantir and Scale AI.
The Pentagon – and the militaries of other global powers – is actively exploring the integration of AI technology into various domains; one program, run through the Defense Advanced Research Projects Agency (DARPA), envisages algorithms capable of independent decision-making in challenging situations.
The researchers warn against a hasty integration of these models into high-stakes military operations and blind reliance on them for complex foreign policy decisions.
The paper suggests AI could trigger a nuclear conflict of the kind James Cameron depicted in the Terminator franchise, without stumbling over the ethical questions or regulatory barriers that are so badly needed right now.
Last year, a cohort of leading scholars began pressing governments to regulate AI and to hold big tech corporations accountable for their AI research, warning that within five years it could be too late.
To wrap up: in the Terminator series, the killing machine travels back in time from the year 2029 to hunt down Sarah Connor. Quite an intriguing coincidence.
***
NewsCafe is a small, independent outlet that cares about big issues. Our only sources of income are ads and donations from readers. You can support us via PayPal: office[at]rudeana.com or paypal.me/newscafeeu. We promise to reward this gesture with more captivating and important topics.