Major players in the AI industry, including Microsoft and Google, have come together to launch a new forum dedicated to the safe and responsible development of artificial intelligence (AI). The forum, called the Frontier Model Forum, aims to develop and promote standards for evaluating AI safety while helping governments, corporations, policymakers, and the public understand the risks, limits, and possibilities of the technology.
The founding members of the Frontier Model Forum are Anthropic, Google, Microsoft, and OpenAI. The forum intends to identify best practices for frontier model safety and to support the development of applications that address society's greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
The forum is open to organizations that develop frontier models, the large-scale models aimed at achieving breakthroughs in machine-learning capability, and that are committed to the safety of their projects. It plans to establish working groups and partnerships with governments, non-governmental organizations (NGOs), and academic institutions.
Microsoft President Brad Smith said in a statement that companies creating AI technology have a responsibility to ensure it is safe and secure and remains under human control.
The launch of the Frontier Model Forum comes in response to growing concerns about the risks associated with AI, and thought leaders in the industry have been calling for meaningful regulation to mitigate them. The CEOs of every forum participant except Microsoft signed a statement in May urging governments and global bodies to prioritize mitigating the risk of extinction from AI, comparing it to the prevention of nuclear war.
The urgency of addressing the risks of AI was highlighted by the testimony of Anthropic CEO Dario Amodei before the US Senate. Amodei warned that AI is much closer to surpassing human intelligence than most people believe and called for strict regulations to prevent nightmare scenarios, such as the use of AI to produce biological weapons. This echoes the concerns expressed by OpenAI CEO Sam Altman, who testified before the US Congress earlier this year, warning that AI could go terribly wrong.
Recognizing the need for oversight, the White House established an AI task force under Vice President Kamala Harris in May. Last week, the White House reached an agreement with the participants of the Frontier Model Forum, as well as Meta and Inflection AI, to subject AI products to external audits for security flaws, privacy risks, discrimination potential, and other issues before they launch on the market. The companies also agreed to report all vulnerabilities to the relevant authorities.
By launching the Frontier Model Forum, these major players in the AI industry are taking a proactive approach to the challenges and risks of AI development. The forum's focus on safety and responsible development reflects the industry's recognition that addressing these issues is essential to the beneficial and ethical use of AI. Through collaboration and cooperation, the forum aims to shape the future of AI development and harness its potential for positive societal impact.