Bringing together senior scientists from around the world to mitigate extreme risks from AI

Past Dialogues

IDAIS-Shanghai

Hinton, Yao, and global AI scientists convene in Shanghai to address AI misalignment risk, urge international cooperation.

2025
Statement

IDAIS-Venice

Western and Chinese scientists: AI safety a “global public good”, global cooperation urgently needed

2024
Statement

IDAIS-Beijing

International scientists meet in Beijing to discuss extreme AI risks, recommend red lines for AI development and international cooperation.

2024
Statement

IDAIS-Oxford

In the Inaugural IDAIS, International Scientists Call for Global Action on AI Safety.

2023
Statement

IDAIS-Shanghai, 2025

SHANGHAI, CHINA July 22nd-25th

Hinton, Yao, and Global AI Scientists Convene in Shanghai to Address AI Misalignment Risk, Urge International Cooperation.

Leading Chinese and international artificial intelligence (AI) scientists, including Turing Award winner Geoffrey Hinton, gathered in Shanghai to address the critical risks of AI deception. The dialogue, the latest in the series of International Dialogues on AI Safety (IDAIS), saw top researchers call for urgent international cooperation to ensure advanced AI systems remain controllable and aligned with human intentions and values.

The event was hosted by the Safe AI Forum (SAIF) in partnership with the Shanghai Qi Zhi Institute and the Shanghai AI Laboratory. Alongside Dr. Hinton, prominent attendees included Turing Award winner Andrew Yao, UC Berkeley professor Stuart Russell, and Professor Zhou Bowen, Director of the Shanghai AI Lab. They were joined by distinguished governance experts including Madame Fu Ying, Dean Xue Lan of Tsinghua University, Johns Hopkins Professor Gillian Hadfield, and Oxford Professor Robert Trager, who provided expertise on international cooperation and governance frameworks.

Discussions centered on the significant dangers posed by AI deception and the possibility of AI systems escaping human control. The scientists explored technical and governance strategies to prevent and correct such behavior in advanced AI. The dialogue culminated in a new consensus statement discussing the emerging empirical evidence of deception and strategies to address it.

The statement highlights a growing body of evidence that AI systems today already demonstrate the capability and propensity to undermine their creators’ safety and control efforts, presenting several examples and case studies of this behavior. It also calls for three actions. First, it recommends requiring safety assurance from developers, mandating rigorous safety evaluations, red-teaming, and both pre- and post-deployment monitoring for advanced AI models. Second, it calls on the international community to establish "Global Verifiable Red Lines" for actions AI must never take, supported by an international body to coordinate on standards and verification. Finally, it advocates for safe development approaches, including "Safe-by-Design" AI systems, which are built to be safe from their inception rather than having safety added on later. The full statement can be read below.

Following their technical discussions, the scientists met with senior Chinese officials and senior executives from Chinese technology companies. In these meetings, the experts presented the consensus statement, emphasizing the necessity for a globally coordinated approach to managing deception risks and ensuring that the development of powerful AI is a safe and shared endeavor.

Statement

IDAIS-Shanghai saw top researchers call for urgent international cooperation to ensure advanced AI systems remain controllable and aligned with human intentions and values.

——Ensuring the alignment and human control of advanced AI systems to safeguard human well-being

——确保高级人工智能系统的对齐与人类控制,以保障人类福祉

Signatories

Geoffrey Hinton

Professor Emeritus, Department of Computer Science
University of Toronto
Turing Award Winner
Nobel Prize Winner

Andrew Yao 姚期智

Turing Award Winner

Dean

Shanghai Qi Zhi Institute

Dean, Institute for Interdisciplinary Information Sciences and College of AI
Tsinghua University

Yoshua Bengio

Professor
Université de Montréal
Founder and Scientific Advisor
Mila – Quebec AI Institute
Chair
International Scientific Report on the Safety of Advanced AI
Turing Award Winner

Ya-Qin Zhang 张亚勤

Chair Professor of AI Science
Tsinghua University
Dean of Institute for AI Industry Research (AIR)
Tsinghua University
Former President
Baidu

Stuart Russell

Professor and Smith-Zadeh Chair in Engineering
University of California, Berkeley
Founder of Center for Human-Compatible Artificial Intelligence (CHAI)
University of California, Berkeley

Fu Ying 傅莹

Xue Lan 薛澜

Dean, Schwarzman College
Tsinghua University
Director, Institute for AI International Governance (I-AIIG)
Tsinghua University

Gillian K. Hadfield

Bloomberg Distinguished Professor of AI Alignment and Governance
Johns Hopkins University

Robert Trager

Director, Oxford Martin AI Governance Initiative
University of Oxford

Sam R. Bowman

Member of Technical Staff
Anthropic, PBC
Associate Professor of Data Science, Computer Science and Linguistics
New York University

Max Tegmark

Professor, Center for Brains, Minds and Machines (CBMM)
Massachusetts Institute of Technology (MIT)
President and Co-founder
Future of Life Institute

Dan Baer

Dan Hendrycks

Executive Director
Center for AI Safety

Advisor
xAI

Advisor
Scale AI

Xu Wei 徐葳

Principal Investigator
Shanghai Qi Zhi Institute

Professor and Vice Dean of the Institute for Interdisciplinary Information Sciences
Tsinghua University

Zhu Yibo 朱亦博

Co-Founder
Stepfun

Wei Kai 魏凯

Director
Artificial Intelligence Institute at the China Academy of Information and Communications Technology (CAICT)

Chair
General Working Group of Artificial Intelligence Industry Alliance (AIIA)

Benjamin Prud’homme

Seán Ó hÉigeartaigh

Director of the AI: Futures and Responsibility Programme
Centre for the Future of Intelligence, University of Cambridge

Maria Eitel

Gao Qiqi 高奇琦

School of International Relations and Public Affairs Professor
Fudan University

Adam Gleave

Founder and CEO
FAR.AI

Tian Tian 田天

CEO
RealAI

He Tianxing 贺天行

Principal Investigator
Shanghai Qi Zhi Institute

Assistant Professor, Institute for Interdisciplinary Information Sciences (IIIS)
Tsinghua University

Brian Tse 谢旻希

Founder and CEO
Concordia AI

Fynn Heide

Executive Director
Safe AI Forum

Lu Chaochao 陆超超

Research Scientist
Shanghai AI Laboratory

Fu Jie 付杰

Research Scientist
Shanghai AI Laboratory

Chen Xin 陈欣

PhD Student
ETH Zurich

Hu Naying 呼娜英

Senior Business Executive
The Artificial Intelligence Institute at the China Academy of Information and Communications Technology (CAICT)

Chair
Governance Group of AI Security, Security and Governance Committee of Artificial Intelligence Industry Alliance (AIIA)

Saad Siddiqui

Senior AI Policy Researcher
Safe AI Forum

Isabella Duan

AI Policy Researcher
Safe AI Forum

IDAIS-Venice, 2024

VENICE, ITALY September 5th-8th

Western and Chinese scientists: AI safety a “global public good”, global cooperation urgently needed.

Leading global artificial intelligence (AI) scientists gathered in Venice in September, where they issued a call urging governments and researchers to collaborate to address AI risks. Computer scientists including Turing Award winners Yoshua Bengio and Andrew Yao, UC Berkeley professor Stuart Russell OBE, and Zhang Ya-Qin, Chair Professor at Tsinghua University, convened for the third in the series of International Dialogues on AI Safety (IDAIS), hosted by the Safe AI Forum (SAIF) in collaboration with the Berggruen Institute.

The event took place over three days at the Casa dei Tre Oci in Venice and focused on safety efforts around so-called artificial general intelligence. The first day involved a series of discussions centered around the nature of AI risks and the variety of strategies required to counter them. Session topics included early warning thresholds, AI Safety Institutes, verification and international governance mechanisms.

These discussions became the basis of a consensus statement signed by the scientists, centered on the idea that AI safety is a “global public good” and suggesting that states carve out AI safety as a cooperative area of academic and technical activity. The statement calls for three areas of policy and research. First, it advocates for “Emergency Preparedness Agreements and Institutions”: a set of global authorities and agreements that could coordinate on AI risk. Second, it suggests developing “Safety Assurance Frameworks”: a more comprehensive set of safety guarantees for advanced AI systems. Finally, it advocates for more AI safety funding and for research into verification systems to ensure that safety claims made by developers or states are trustworthy. The full statement can be read below.

On the second day, the scientists were joined by a group of policymakers, including former President of Ireland Mary Robinson, and other experts. The scientists emphasized the urgency of implementing these proposals given the rapid pace of AI development. The statement was presented directly to the policymakers, and the group strategized about how the international community might work together to accomplish these goals.

Statement

The global nature of AI risks makes it necessary to recognize AI safety as a global public good”

由于人工智能带来的风险具有全球性,我们必须将人工智能安全视为全球公共产品”

Signatories

Yoshua Bengio

Professor
Université de Montréal
Founder and Scientific Advisor
Mila – Quebec AI Institute
Chair
International Scientific Report on the Safety of Advanced AI
Turing Award Winner

Andrew Yao 姚期智

Turing Award Winner

Dean

Shanghai Qi Zhi Institute

Dean, Institute for Interdisciplinary Information Sciences and College of AI
Tsinghua University

Geoffrey Hinton

Professor Emeritus, Department of Computer Science
University of Toronto
Turing Award Winner
Nobel Prize Winner

Zhang Ya-Qin 张亚勤

Chair Professor and Dean of the Institute for AI Industry Research (AIR)
Tsinghua University
Former President of Baidu

Stuart Russell

Professor and Smith-Zadeh Chair in Engineering
University of California, Berkeley
Founder of Center for Human-Compatible Artificial Intelligence (CHAI)
University of California, Berkeley

Gillian K. Hadfield

Bloomberg Distinguished Professor of AI Alignment and Governance
Johns Hopkins University

Mary Robinson

Former President of Ireland
Former United Nations High Commissioner for Human Rights
Adjunct Professor in Climate Justice
Trinity College Dublin

Xue Lan 薛澜

Dean, Schwarzman College
Tsinghua University
Director, Institute for AI International Governance (I-AIIG)
Tsinghua University

Mariano-Florentino (Tino) Cuéllar

President
Carnegie Endowment for International Peace
Director of Freeman Spogli Institute
Stanford University
Former California Supreme Court Justice

Fu Ying 傅莹

Zeng Yi 曾毅

Director, International Research Center for AI Ethics and Governance
Chinese Academy of Sciences (CAS)
Deputy Director, Research Center for Brain-inspired Intelligence
Chinese Academy of Sciences (CAS)
Member, High-level Advisory Body on AI
United Nations

He Tianxing 贺天行

Principal Investigator
Shanghai Qi Zhi Institute

Assistant Professor, Institute for Interdisciplinary Information Sciences (IIIS)
Tsinghua University

Lu Chaochao 陆超超

Research Scientist
Shanghai AI Laboratory

Kwok-Yan Lam

Executive Director
Digital Trust Centre (DTC) and Singapore’s AI Safety Institute
Associate Vice President
Nanyang Technological University (NTU)
Professor, School of Computer Science and Engineering
Nanyang Technological University (NTU)

Tang Jie 唐杰

Chief Scientist
Zhipu AI
Professor of Computer Science
Tsinghua University

Dawn Nakagawa

President
The Berggruen Institute

Benjamin Prud’homme

Robert Trager

Director, Oxford Martin AI Governance Initiative
University of Oxford

Yang Yaodong 杨耀东

Assistant Professor, Institute for Artificial Intelligence
Head, PKU Alignment and Interaction Research Lab (PAIR)
Peking University

Yang Chao 杨超

Research Scientist
Shanghai AI Laboratory

Zhang HongJiang 张宏江

Founding Chairman
BAAI
Foreign member
US National Academy of Engineering

Wang Zhongyuan 王仲远

Director
BAAI

Sam R. Bowman

Member of Technical Staff
Anthropic, PBC
Associate Professor of Data Science, Computer Science and Linguistics
New York University

Dan Baer

Sebastian Hallensleben

Chair
CEN-CENELEC JTC 21
Head of Digitalisation and Artificial Intelligence
VDE Association for Electrical Electronic and Information Technologies
Member
Expert Advisory Board of the European Union

Ong Chen Hui

Assistant Chief Executive
Infocomm and Media Development Authority (IMDA) of Singapore

Fynn Heide

Executive Director
Safe AI Forum

Conor McGurk

Managing Director
Safe AI Forum

Saad Siddiqui

Senior AI Policy Researcher
Safe AI Forum

Isabella Duan

AI Policy Researcher
Safe AI Forum

Adam Gleave

Founder and CEO
FAR.AI

Xin Chen

PhD Student, ETH Zurich

IDAIS-Beijing, 2024

BEIJING, CHINA March 10th-11th

International scientists meet in Beijing to discuss extreme AI risks, recommend red lines for AI development and international cooperation.

Leading global AI scientists convened in Beijing for the second International Dialogue on AI Safety (IDAIS-Beijing). During the event, computer scientists including Turing Award winners Yoshua Bengio, Andrew Yao, and Geoffrey Hinton worked with governance experts such as Tsinghua professor Xue Lan and University of Toronto professor Gillian Hadfield to chart a path forward on international AI safety.

The event took place over two days at the Aman Summer Palace in Beijing and focused on safely navigating the development of Artificial General Intelligence (AGI) systems. The first day involved technical and governance discussions of AI risk, where scientists shared research agendas in AI safety as well as potential regulatory regimes. The discussions culminated in a consensus statement recommending a set of red lines for AI development to prevent catastrophic and existential risks from AI. In the consensus statement, the scientists advocate for prohibiting the development of AI systems that can autonomously replicate, improve, seek power, or deceive their creators, and of those that enable building weapons of mass destruction or conducting cyberattacks. Additionally, the statement laid out a series of measures to be taken to ensure those lines are never crossed. The full statement can be read below.

On the second day, the scientists met with senior Chinese officials and CEOs. The scientists presented the red lines proposal and discussed existential risks from artificial intelligence, and officials expressed enthusiasm about the consensus statement. Discussions focused on the necessity of international cooperation on this issue.

Statement

In the depths of the Cold War, international scientific and governmental coordination helped avert thermonuclear catastrophe. Humanity again needs to coordinate to avert a catastrophe that could arise from unprecedented technology.”

在过去冷战最激烈的时候,国际科学界与政府间的合作帮助避免了热核灾难。面对前所未有的技术,人类需要再次合作以避免其可能带来的灾难的发生。”

Signatories

Geoffrey Hinton

Professor Emeritus, Department of Computer Science
University of Toronto
Turing Award Winner
Nobel Prize Winner

Andrew Yao 姚期智

Turing Award Winner

Dean

Shanghai Qi Zhi Institute

Dean, Institute for Interdisciplinary Information Sciences and College of AI
Tsinghua University

Yoshua Bengio

Professor
Université de Montréal
Founder and Scientific Advisor
Mila – Quebec AI Institute
Chair
International Scientific Report on the Safety of Advanced AI
Turing Award Winner

Ya-Qin Zhang 张亚勤

Chair Professor of AI Science
Tsinghua University
Dean of Institute for AI Industry Research (AIR)
Tsinghua University
Former President
Baidu

Fu Ying 傅莹

Stuart Russell

Professor and Smith-Zadeh Chair in Engineering
University of California, Berkeley
Founder of Center for Human-Compatible Artificial Intelligence (CHAI)
University of California, Berkeley

Xue Lan 薛澜

Dean, Schwarzman College
Tsinghua University
Director, Institute for AI International Governance (I-AIIG)
Tsinghua University

Gillian K. Hadfield

Bloomberg Distinguished Professor of AI Alignment and Governance
Johns Hopkins University

HongJiang Zhang

Founding Chairman

BAAI

Huang Tiejun 黄铁军

Chairman
BAAI
Professor at the School of Computer Science
Peking University

Zeng Yi 曾毅

Director, International Research Center for AI Ethics and Governance
Chinese Academy of Sciences (CAS)
Deputy Director, Research Center for Brain-inspired Intelligence
Chinese Academy of Sciences (CAS)
Member, High-level Advisory Body on AI
United Nations

Robert Trager

Director, Oxford Martin AI Governance Initiative
University of Oxford

Kwok-Yan Lam

Executive Director
Digital Trust Centre (DTC) and Singapore’s AI Safety Institute
Associate Vice President
Nanyang Technological University (NTU)
Professor, School of Computer Science and Engineering
Nanyang Technological University (NTU)

Dawn Song

Professor, Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Founder
Oasis Lab

Zhongyuan Wang

Director

BAAI

Dylan Hadfield-Menell

Assistant Professor, Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology (MIT)
Algorithmic Alignment Group Lead, Computer Science and AI Laboratory (CSAIL)
Massachusetts Institute of Technology (MIT)
Early Career Fellow
AI2050

Yaodong Yang

Assistant Professor

Institute for AI, Peking University

Head

PKU Alignment and Interaction Research Lab (PAIR)

Zhang Peng 张鹏

Founder and CEO
Zhipu AI

Li Hang 李航

Head of Research
ByteDance
Fellow
ACM, ACL, and IEEE

Tian Tian 田天

CEO
RealAI

Tian Suning, Edward 田溯宁

Founder and Chairman
China Broadband Capital Partners LP (CBC)
Chairman
AsiaInfo Group

Toby Ord

Senior Researcher
Oxford University

Fynn Heide

Executive Director
Safe AI Forum

Adam Gleave

Founder and CEO
FAR.AI

IDAIS-Oxford, 2023

DITCHLEY PARK, UK October 31st

In the Inaugural IDAIS, International Scientists Call for Global Action on AI Safety.

Ahead of the highly anticipated AI Safety Summit, leading AI scientists from the US, the PRC, the UK and other countries agreed on the importance of global cooperation and jointly called for research and policies to prevent unacceptable risks from advanced AI.

Prominent scientists from the USA, the PRC, the UK, Europe, and Canada gathered for the first International Dialogue on AI Safety. The meeting was convened by Turing Award winners Yoshua Bengio and Andrew Yao, UC Berkeley professor Stuart Russell OBE, and founding Dean of the Tsinghua Institute for AI Industry Research Ya-Qin Zhang. The event took place at Ditchley Park near Oxford. Attendees worked to build a shared understanding of risks from advanced AI systems, inform intergovernmental processes, and lay the foundations for further cooperation to prevent worst-case outcomes from AI development.

Statement

Coordinated global action on AI safety research and governance is critical to prevent uncontrolled frontier AI development from posing unacceptable risks to humanity.”

在人工智能安全研究和治理方面协调一致的全球行动对于防止不受控制的前沿人工智能发展给人类带来不可接受的风险至关重要。”

Signatories

Andrew Yao 姚期智

Turing Award Winner

Dean

Shanghai Qi Zhi Institute

Dean, Institute for Interdisciplinary Information Sciences and College of AI
Tsinghua University

Yoshua Bengio

Professor
Université de Montréal
Founder and Scientific Advisor
Mila – Quebec AI Institute
Chair
International Scientific Report on the Safety of Advanced AI
Turing Award Winner

Stuart Russell

Professor and Smith-Zadeh Chair in Engineering
University of California, Berkeley
Founder of Center for Human-Compatible Artificial Intelligence (CHAI)
University of California, Berkeley

Ya-Qin Zhang 张亚勤

Chair Professor of AI Science
Tsinghua University
Dean of Institute for AI Industry Research (AIR)
Tsinghua University
Former President
Baidu

Ed Felten

Robert E. Kahn Professor of Computer Science and Public Affairs
Princeton University
Founding Director, Center for Information Technology Policy
Princeton University

Roger Grosse

Associate Professor of Computer Science
University of Toronto
Founding Member, Vector Institute for Artificial Intelligence
University of Toronto

Gillian K. Hadfield

Bloomberg Distinguished Professor of AI Alignment and Governance
Johns Hopkins University

Sana Khareghani

Professor of Practice in AI
King’s College London
AI Policy Lead
Responsible AI UK
Former Head of UK Government Office for Artificial Intelligence

Dylan Hadfield-Menell

Assistant Professor, Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology (MIT)
Algorithmic Alignment Group Lead, Computer Science and AI Laboratory (CSAIL)
Massachusetts Institute of Technology (MIT)
Early Career Fellow
AI2050

Karine Perset

Acting Head, AI and Emerging Digital Technologies Division
Organisation for Economic Co-operation and Development (OECD)

Dawn Song

Professor, Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Founder
Oasis Lab

Xin Chen

PhD Student, ETH Zurich

Max Tegmark

Professor, Center for Brains, Minds and Machines (CBMM)
Massachusetts Institute of Technology (MIT)
President and Co-founder
Future of Life Institute

Elizabeth Seger

Director of Digital Policy
Demos

Yi Zeng

Professor and Director of Brain-inspired Cognitive Intelligence Lab

Institute of Automation, Chinese Academy of Sciences

Founding Director

Center for Long-term AI

HongJiang Zhang

Founding Chairman

BAAI

Yang-Hui He 何杨辉

Fellow
London Institute

Adam Gleave

Founder and CEO
FAR.AI

Fynn Heide

Executive Director
Safe AI Forum

Upcoming Events

TBA Early 2026

We plan to host our fifth IDAIS event in early 2026. For more details, contact us using the form below.