In a stern warning, the chief of Google DeepMind, Demis Hassabis, has drawn a parallel between the risks posed by Artificial Intelligence (AI) and the climate crisis. He argues that the world cannot afford to delay its response to the potential dangers of AI, just as it could not afford to delay action on climate change. The remarks come as the UK government prepares for an AI safety summit, spotlighting the urgency of regulatory oversight in the AI sector.
- AI risks likened to the climate crisis by Google DeepMind’s chief.
- Urgent global response and regulatory oversight emphasized.
- Reference to the Intergovernmental Panel on Climate Change (IPCC) as a potential model for AI oversight.
- Mention of existential threats like bioweapons and super-intelligent systems.
- Suggestion of international structures akin to CERN and IAEA for AI safety and auditing.
Hassabis voiced his concerns in light of the forthcoming summit on AI safety hosted by the UK government. He suggested that oversight of the AI industry could commence with a body resembling the Intergovernmental Panel on Climate Change (IPCC). The comparison draws attention to the structured and international approach taken by the IPCC in addressing climate change issues, hinting at a similar pathway for managing AI risks.
Existential Threats and Oversight:
The Google DeepMind chief elaborated on the perilous facets of AI, including its potential to aid in the creation of bioweapons and the existential threat posed by super-intelligent systems. He stressed the need for a prompt global response to these challenges, so as to avoid repeating the consequences of delayed action on climate change. Hassabis acknowledged AI as a significant and beneficial technology but highlighted the pressing need for a regime of oversight.
A Call for International Cooperation:
Drawing inspiration from international structures like the IPCC, Hassabis envisaged beginning with a scientific research agreement that produces regular reports, then building towards a CERN equivalent for AI safety that operates internationally. He further envisioned an equivalent of the International Atomic Energy Agency (IAEA) for auditing AI safety, emphasizing a collaborative international effort to address AI risks.
The call to action by Google DeepMind’s chief underscores the gravity of AI risks, which he likens to the climate crisis. Hassabis advocates an immediate global response and the establishment of international oversight bodies to prevent potentially catastrophic outcomes. The remarks set a serious tone as the UK government gears up for an AI safety summit, marking a pivotal moment in the discourse surrounding AI safety and regulation.