MIT physicist: AI is getting out of control and may develop priorities other than humanity's


Max Tegmark says tech companies are aware of AI risks, but feel commercial pressure to continue innovating no matter what.

A physicist and AI researcher at the Massachusetts Institute of Technology (MIT) and a co-founder of the Future of Life Institute, which studies risks from transformative technologies, Max Tegmark long believed in the promise of artificial intelligence, envisioning a near future in which superintelligent computers fight climate change, find cures for cancer and Alzheimer's disease, and solve the planet's most pressing problems.

Over the past two years, his optimism about AI has given way to pessimism, and lately the scientist has been sounding the alarm.

“I had really hoped that humanity would be more up to the task of managing this,” he said in a recent interview with the Wall Street Journal, pointing out that tech companies, while recognizing that AI poses enormous risks to the very existence of humanity, nonetheless keep racing to build systems they don't fully understand and make little effort to contain with adequate regulation.

Artificial general intelligence (AGI), a system that learns on its own and will surpass human powers of thinking and acting, will almost certainly develop priorities that don't align with ours.

It will outperform humans in every respect yet remain outside their control, and this will happen within a couple of years, according to Tegmark.

“Ever since the term AI was coined in the 1950s, it was obvious to leaders in the field that if we ever succeeded in getting close to human-level AI, it was pretty likely we would face enormous risks, like extinction, but people kept thinking this was really far away. […] What’s new is not the idea. It’s that, holy guacamole, it’s happening, and we didn’t have as much time as we thought to do the safety part,” Tegmark stated.

His warnings have grown louder in response to the lack of progress by governments, particularly in the United States, on regulating AI and brokering agreement among tech giants on ethics and safety norms.

Tech leaders will often admit privately that they are concerned about AI risks but feel real commercial pressure to innovate faster than their rivals, the WSJ quoted Tegmark as saying. The same dynamic extends to governments as well.

In March, Tegmark's Future of Life Institute published an open letter calling for a pause in training the most powerful AI systems until researchers adopt a shared set of safety standards. Its 33,000+ signatories include SpaceX founder and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and Yoshua Bengio, a computer scientist who won the Turing Award for his pioneering work in AI.

In May, the CEOs of OpenAI, Google DeepMind, Anthropic, and other AI companies endorsed a statement from the Center for AI Safety, a nonprofit organization, declaring that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.


