A.I. pioneers call for global oversight to mitigate catastrophic risks. The Berggruen Institute's Dawn Nakagawa joins AI leaders, @AIS_Dialogues & @Safe_AI_Forum in advocating for international collaboration in A.I. safety. Learn more here: https://t.co/ou0fhlIm65
— Berggruen Institute (@berggruenInst) September 18, 2024
The world’s leading AI scientists are urging governments to work together to regulate AI technology before it spirals out of control. Three Turing Award winners who spearheaded AI research joined a dozen top scientists worldwide in calling for better measures to advance AI safely. These scientists argue that rapid AI advancements could bring grave consequences if mishandled or misused.
We don't have to agree on the probability of catastrophic AI events to agree that we should have some global protocols in place in the event of international AI incidents that require coordinated responses. More here: https://t.co/FrKyLXJLYf
— Gillian Hadfield (@ghadfield) September 16, 2024
“Loss of human control or malicious use of AI systems could lead to catastrophic outcomes for all of humanity,” they wrote in an open letter, warning that such outcomes could arrive at any time. To address these risks, the scientists recommended several steps:
Governments need to collaborate on AI safety precautions.
I wrote a piece detailing my research on the interim International Scientific Report on the Safety of Advanced AI.
I’ve started refining these sections for the final report, which will be published ahead of the AI Action Summit in France next year. https://t.co/wD1wuvaKlx
— Dr. Chinasa T. Okolo (@ChinasaTOkolo) September 16, 2024
This could involve creating dedicated national AI authorities to handle AI incidents and risks within their borders.
Are AIcorp execs saying that their AI is so powerful that it might destroy the world, in order to raise venture capital? Of course! So ignore them one way or the other, and listen to the researchers who resigned burning their bridges and the scientists who never worked there,…
— Eliezer Yudkowsky ⏹️ (@ESYudkowsky) September 15, 2024
These authorities would ideally work together internationally, and a new global body should be established to prevent the development of risky AI models. This body would ensure states adopt essential safety measures, including model registration, disclosure, and safeguards.
Developers should commit to the safety of their models, ensuring they do not cross critical ethical lines. This includes avoiding the creation of AI that can autonomously replicate, improve, seek power, deceive creators, or enable weapons of mass destruction or cyberattacks, as outlined by top scientists in a meeting in Beijing last year. Another proposal is establishing global AI safety and verification funds, supported by governments, philanthropists, and corporations.
These funds would sponsor independent research to develop better technological checks on AI. Among the experts urging action were Turing Award winners, including Yoshua Bengio, one of the most cited computer scientists. The letter also praised existing international cooperation on AI, such as a meeting in Geneva between U.S. and Chinese leaders to discuss AI risks.
However, the scientists emphasized the need for more collaboration.
Urgent need for AI governance
They argue that AI development should come with ethical norms for engineers, similar to those for doctors or lawyers.
Governments should treat AI as a global public good. “Collectively, we must prepare to avert the attendant catastrophic risks that could arrive at any time,” the letter read.
In the letter, released on Sept. 16, the scientists urge nations to create a global oversight system to prevent potential “catastrophic outcomes” if humans lose control of AI, sharing their concern that the technology they helped develop could cause grave harm to humanity if not properly controlled.
“Unfortunately, we have not yet developed the necessary science to control and safeguard the use of such advanced intelligence.”
The scientists agreed that nations need to develop authorities to detect and respond to AI incidents and catastrophic risks within their jurisdictions, and they stressed the need for a “global contingency plan.” In the longer term, they advocate an international governance regime to prevent the development of models that could pose global catastrophic risks.
The statement builds upon findings from the International Dialogue on AI Safety held in Venice in early September, the third meeting of its kind organized by the nonprofit US research group Safe AI Forum. Johns Hopkins University professor Gillian Hadfield commented, “If we had some sort of catastrophe six months from now, if we detect there are models that are starting to autonomously self-improve, who are you going to call?”
The scientists said AI safety should be recognized as a global public good, requiring international cooperation and governance.
They proposed three key processes: emergency preparedness agreements and institutions, a safety assurance framework, and independent global AI safety and verification research. The statement had more than 30 signatories from the United States, Canada, China, Britain, Singapore, and other countries. The group comprised experts from leading AI research institutions and universities, including several winners of the Turing Award, often described as the Nobel Prize of computing.
The scientists said the dialogue was necessary due to shrinking scientific exchange between superpowers and growing distrust between the US and China, adding to the difficulty of achieving consensus on AI threats. In early September, the US, EU, and UK signed the world’s first legally binding international AI treaty, prioritizing human rights and accountability in AI regulation. However, tech corporations and executives have expressed concerns that such regulations could stifle innovation, especially in the European Union.