IDAIS-Venice, 2024

VENICE, ITALY September 5th-8th

Western and Chinese scientists: AI safety a “global public good”, global cooperation urgently needed.

Leading global artificial intelligence (AI) scientists gathered in Venice in September, where they issued a call urging governments and researchers to collaborate to address AI risks. Computer scientists including Turing Award winners Yoshua Bengio and Andrew Yao, UC Berkeley professor Stuart Russell OBE, and Zhang Ya-Qin, Chair Professor at Tsinghua University, convened for the third in a series of International Dialogues on AI Safety (IDAIS), hosted by the Safe AI Forum (SAIF) in collaboration with the Berggruen Institute.

The event took place over three days at the Casa dei Tre Oci in Venice and focused on safety efforts around so-called artificial general intelligence. The first day involved a series of discussions centered around the nature of AI risks and the variety of strategies required to counter them. Session topics included early warning thresholds, AI Safety Institutes, verification and international governance mechanisms.

These discussions became the basis of a consensus statement signed by the scientists, centered on the idea that AI safety is a “global public good” and suggesting that states carve out AI safety as a cooperative area of academic and technical activity. The statement calls for three areas of policy and research. First, it advocates for “Emergency Preparedness Agreements and Institutions”: a set of global authorities and agreements that could coordinate on AI risk. Second, it suggests developing “Safety Assurance Frameworks”: a more comprehensive set of safety guarantees for advanced AI systems. Finally, it advocates for more AI safety funding and for research into verification systems to ensure that safety claims made by developers or states are trustworthy. The full statement can be read below.

On the second day, the scientists were joined by a group of policymakers, former President of Ireland Mary Robinson, and other experts. The scientists emphasized the urgency of implementing these proposals given the rapid pace of AI development. The statement was presented directly to the policymakers, and the group strategized about how the international community might work together to accomplish these goals.


Statement


“The global nature of AI risks makes it necessary to recognize AI safety as a global public good.”


IDAIS-Beijing, 2024

BEIJING, CHINA March 10th-11th

International scientists meet in Beijing to discuss extreme AI risks, recommend red lines for AI development and international cooperation.

Leading global AI scientists convened in Beijing for the second International Dialogue on AI Safety (IDAIS-Beijing), hosted by the Safe AI Forum in collaboration with the Beijing Academy of AI (BAAI). During the event, computer scientists including Turing Award winners Yoshua Bengio, Andrew Yao, and Geoffrey Hinton, along with the founding and current BAAI chairmen HongJiang Zhang and Huang Tiejun, worked with governance experts such as Tsinghua professor Xue Lan and University of Toronto professor Gillian Hadfield to chart a path forward on international AI safety.

The event took place over two days at the Aman Summer Palace in Beijing and focused on safely navigating the development of Artificial General Intelligence (AGI) systems. The first day involved technical and governance discussions of AI risk, where scientists shared research agendas in AI safety as well as potential regulatory regimes. The discussion culminated in a consensus statement recommending a set of red lines for AI development to prevent catastrophic and existential risks from AI. In the consensus statement, the scientists advocate for prohibiting the development of AI systems that can autonomously replicate, improve, seek power, or deceive their creators, as well as those that enable the building of weapons of mass destruction or the conduct of cyberattacks. Additionally, the statement laid out a series of measures to be taken to ensure those lines are never crossed. The full statement can be read below.

On the second day, the scientists met with senior Chinese officials and CEOs. The scientists presented the red lines proposal and discussed existential risks from artificial intelligence, and officials expressed enthusiasm about the consensus statement. Discussions focused on the necessity of international cooperation on this issue.


Statement


“In the depths of the Cold War, international scientific and governmental coordination helped avert thermonuclear catastrophe. Humanity again needs to coordinate to avert a catastrophe that could arise from unprecedented technology.”


IDAIS-Oxford, 2023

DITCHLEY PARK, UK October 31st

In the Inaugural IDAIS, International Scientists Call for Global Action on AI Safety.

Ahead of the highly anticipated AI Safety Summit, leading AI scientists from the US, the PRC, the UK and other countries agreed on the importance of global cooperation and jointly called for research and policies to prevent unacceptable risks from advanced AI.

Prominent scientists from the USA, the PRC, the UK, Europe, and Canada gathered for the first of the International Dialogues on AI Safety. The meeting was convened by Turing Award winners Yoshua Bengio and Andrew Yao, UC Berkeley professor Stuart Russell OBE, and founding Dean of the Tsinghua Institute for AI Industry Research Ya-Qin Zhang. The event took place at Ditchley Park near Oxford. Attendees worked to build a shared understanding of risks from advanced AI systems, inform intergovernmental processes, and lay the foundations for further cooperation to prevent worst-case outcomes from AI development.

Statement


“Coordinated global action on AI safety research and governance is critical to prevent uncontrolled frontier AI development from posing unacceptable risks to humanity.”



About

The International Dialogues on AI Safety (IDAIS) bring together leading scientists from around the world to collaborate on mitigating risks from AI. The inaugural IDAIS event in October 2023 was convened by Turing Award winners Yoshua Bengio and Andrew Yao, UC Berkeley professor Stuart Russell OBE, and founding Dean of the Tsinghua Institute for AI Industry Research Ya-Qin Zhang. IDAIS is supported by the Safe AI Forum, an organization co-founded by Fynn Heide and Conor McGurk and fiscally sponsored by FAR AI. The Safe AI Forum does not receive funding from any corporate AI labs.

Please use the form below to get in touch.