Date: September 17, 2024
Renowned AI scientists have issued an open letter calling on global governments to guard against catastrophic risks to humanity.
The International Dialogue on AI Safety in Venice sparked a collective call from some of the world's most renowned AI pioneers. A group of influential AI scientists has published an open letter urging governments worldwide to create a global oversight and control system before AI development slips beyond human control.
The dialogue in Venice concluded with a focus on building AI for the greater good of humanity. On September 16, the group published an open letter outlining the collective steps nations must take to prevent AI-driven catastrophes.
“Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity,” the statement read. “Unfortunately, we have not yet developed the necessary science to control and safeguard the use of such advanced intelligence.”
The open letter calls for three pillars of AI monitoring and oversight: emergency preparedness agreements and institutions, a safety assurance framework, and independent global AI safety and verification research.
Over 30 signatories from the United States, Canada, China, Britain, Singapore, and other countries called for a global contingency plan that would enable immediate action in an emergency. AI researchers from top institutions and universities noted that scientific exchange on AI advancements between superpowers is shrinking, largely because of growing distrust between the US and China.
“In the longer term, states should develop an international governance regime to prevent the development of models that could pose global catastrophic risks,” said the statement.
This was the third dialogue on AI safety convened by the Safe AI Forum, a US nonprofit research group. In early September, the US, UK, and Europe signed the world’s first legally binding international AI treaty, which prioritizes human safety, rights, and wellbeing over AI innovation and sets concrete guidelines placing accountability for AI regulation on its makers. Tech corporations and leading AI companies have warned that over-regulation could weaken innovation, especially in the EU. Even so, the EU and other governments have strongly supported AI tools for productivity, education, and other pro-human applications.
By Arpit Dubey