Three tech giants, Google, Microsoft, and OpenAI, together with the startup Anthropic, have formed the Frontier Model Forum, an industry body that will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, for example by advancing technical evaluations and benchmarks and by developing a public library of solutions that support industry best practices and standards.
The four companies, among the most advanced in the AI industry, identified four core objectives in a joint press release (Google; Microsoft; OpenAI):
1. Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
3. Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
4. Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
“Governments and industry agree that, while AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others. To build on these efforts, further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly. The Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility,” the signatories stated.
The Forum will focus on three key areas over the coming year to support the safe and responsible development of frontier AI models: identifying best practices, advancing AI safety research, and facilitating information sharing among companies and governments.
Over the coming months, the Frontier Model Forum will establish an Advisory Board, representing a diversity of backgrounds and perspectives, to help guide its strategy and priorities.
The founding companies will also establish key institutional arrangements, including a charter, governance, and funding, with a working group and executive board to lead these efforts. They plan to consult with civil society and governments in the coming weeks on the design of the Forum and on meaningful ways to collaborate.
The alliance will be open to new members that adhere to its values and goals.
The Frontier Model Forum is not the only initiative to promote safe and responsible AI. Last June, PepsiCo announced a partnership with the Stanford Institute for Human-Centered AI.
The Massachusetts Institute of Technology’s Schwarzman College of Computing has established the AI Policy Forum, a global effort to formulate concrete guidance for governments and companies on challenges of AI such as privacy, fairness, bias, transparency, and accountability.
Carnegie Mellon University formed the Safe AI Lab to develop reliable, explainable, verifiable, and good-for-all AI learning methods for consequential applications.
AI is the biggest threat to humanity today, according to Israeli scholar Yuval Harari, who described the launch of the large language model ChatGPT as “the opening of Pandora’s box.” He and hundreds of other scientists have called for an immediate halt to large AI projects in order to give governments time to regulate the rapidly growing industry.