OpenAI assembles team to stop AI from triggering a nuclear catastrophe


The new force will "track, evaluate, forecast, and protect" against AI threats.

Sam Altman, the CEO of artificial intelligence developer OpenAI, the company behind the large language model chatbot ChatGPT, has finally gotten serious about the potential harm from AI.

"I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes," Altman tweeted this week ahead of the UK AI Safety Summit

Humanity, he said in comments on AI-generated threats, may still be a long way from building artificial general intelligence (AGI) capable of matching human cognitive abilities, and may never achieve that goal. This contradicts his earlier statements on AI safety.


Nonetheless, this has not stopped OpenAI from assembling a team tasked to "track, evaluate, forecast, and protect" against catastrophic threats posed by AI, including "chemical, biological, radiological, and nuclear" risks.

“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly severe risks,” the company said in a blog post.

The new taskforce will have to answer questions like:

  • How dangerous are frontier AI systems when put to misuse, both now and in the future?
  • How can we build a robust framework for monitoring, evaluation, prediction, and protection against the dangerous capabilities of frontier AI systems?
  • If our frontier AI model weights were stolen, how might malicious actors choose to leverage them?

OpenAI is also inviting the general public to share concerns about AI on a Preparedness Challenge page, in order to expand the team’s understanding of areas of risk.

Among the categories singled out for scrutiny by the new taskforce, OpenAI named 1) individualized persuasion; 2) cybersecurity; 3) chemical, biological, radiological, and nuclear threats; and 4) autonomous replication and adaptation. Challenge participants are invited to provide feedback on these investigations and to suggest ways AI could carry out the intended harm.

Sam Altman speaking of the need for AGI regulation. Credit: EPA

The team’s ultimate mission includes creating and maintaining a Risk-Informed Development Policy (RDP), which would detail OpenAI’s approach to developing “rigorous frontier model capability evaluations and monitoring, creating a spectrum of protective actions, and establishing a governance structure for accountability and oversight across that development process.”

RAND Corporation, a nonprofit research organization that develops solutions to public policy challenges, said in a paper published in 2018 that advances in artificial intelligence were enabling previously infeasible capabilities, potentially destabilizing the delicate balances that have forestalled nuclear war since 1945. 

RAND named the race for AI superiority a factor that could trigger the first (and last) nuclear war in human history. 



