
AI experts warn of catastrophic risks


The world’s leading AI scientists are urging governments to work together to regulate AI technology before it spirals out of control. Three Turing Award winners who spearheaded AI research joined a dozen top scientists worldwide in calling for better measures to advance AI safely. These scientists argue that rapid AI advancements could bring grave consequences if mishandled or misused.

“Loss of human control or malicious use of AI systems could lead to catastrophic outcomes for all of humanity,” they wrote in an open letter, warning that such outcomes could arrive at any time. To address the risks of malicious AI use, the scientists recommended several steps:

- Governments need to collaborate on AI safety precautions. This could involve creating specific AI authorities to handle AI incidents and risks within their borders.

- These authorities would ideally work together internationally, and a new global body should be established to prevent the development of risky AI models.

- This body would ensure states adopt essential safety measures, including model registration, disclosure, and safeguards.


Developers should commit to the safety of their models, ensuring they do not cross critical ethical lines. This includes avoiding the creation of AI that can autonomously replicate, improve, seek power, deceive creators, or enable weapons of mass destruction or cyberattacks, as outlined by top scientists in a meeting in Beijing last year. Another proposal is establishing global AI safety and verification funds, supported by governments, philanthropists, and corporations.

These funds would sponsor independent research to develop better technological checks on AI. Among the experts urging action were several Turing Award winners, including Yoshua Bengio, one of the world’s most cited computer scientists. The letter also praised existing international cooperation on AI, such as a meeting in Geneva between U.S. and Chinese leaders to discuss AI risks.

However, the scientists emphasized the need for more collaboration.

Urgent need for AI governance

They argue that AI development should come with ethical norms for engineers, similar to those for doctors or lawyers.

Governments should treat AI as a global public good. “Collectively, we must prepare to avert the attendant catastrophic risks that could arrive at any time,” the letter read.

The scientists are also urging nations to create a global oversight system to prevent potential “catastrophic outcomes” if humans lose control of AI. In the open letter, released on Sept. 16, they shared their concern that the technology they helped develop could endanger humanity if not properly controlled.


“Unfortunately, we have not yet developed the necessary science to control and safeguard the use of such advanced intelligence.”

The scientists agreed that nations need to develop authorities to detect and respond to AI incidents and catastrophic risks within their jurisdictions, and they stressed the need for a “global contingency plan.” In the longer term, they advocate for an international governance regime to prevent the development of models that could pose global catastrophic risks.

The statement builds upon findings from the International Dialogue on AI Safety in Venice in early September, the third meeting of its kind organized by the nonprofit US research group Safe AI Forum. Johns Hopkins University Professor Gillian Hadfield commented, “If we had some sort of catastrophe six months from now, if we detect there are models that are starting to autonomously self-improve, who are you going to call?”

The scientists stated that AI safety is recognized as a global public good, requiring international cooperation and governance.

They proposed three key processes: emergency preparedness agreements and institutions, a safety assurance framework, and independent global AI safety and verification research. The statement had more than 30 signatories from the United States, Canada, China, Britain, Singapore, and other countries. The group comprised experts from leading AI research institutions and universities, including several Turing Award winners; the award is often described as the Nobel Prize of computing.

The scientists said the dialogue was necessary due to shrinking scientific exchange between superpowers and growing distrust between the US and China, adding to the difficulty of achieving consensus on AI threats. In early September, the US, EU, and UK signed the world’s first legally binding international AI treaty, prioritizing human rights and accountability in AI regulation. However, tech corporations and executives have expressed concerns that such regulations could stifle innovation, especially in the European Union.


Cameron is a highly regarded contributor in the rapidly evolving fields of artificial intelligence (AI) and machine learning. His articles delve into the theoretical underpinnings of AI, the practical applications of machine learning across industries, ethical considerations of autonomous systems, and the societal impacts of these disruptive technologies.

About Our Editorial Process

At DevX, we’re dedicated to tech entrepreneurship. Our team closely follows industry shifts, new products, AI breakthroughs, technology trends, and funding announcements. Articles undergo thorough editing to ensure accuracy and clarity, reflecting DevX’s style and supporting entrepreneurs in the tech sphere.

See our full editorial policy.