London, 2026

LONDON, UK April 17th–19th

Statement


We recommend states recognise the common threat posed by the proliferation of AI-enabled cyberattack and biological misuse capabilities and commit to prevention efforts.

Continuously advancing AI capabilities are empowering malicious, non-state actors in two ways: lowering the floor for who can cause harm, and raising the ceiling on how much harm can be caused.

The potential for harm is currently most acute in the cyber domain. Offensive operations that once required weeks of specialist labour can now be executed by some frontier AI systems and, through them, by anyone with access to these systems. Attacks once aimed at individual targets can be deployed at scale, and sophisticated exploits that once required well-resourced actors—cyberattacks that shut down hospitals, water supply systems, the electrical grid, financial markets, or air traffic control—can be mounted by far more people at far greater scale.

The same pattern, on a slower timeline, is also unfolding in biology. Frontier AI systems are outperforming PhD-level experts on tasks relevant to pathogen design, putting lower-level biological capabilities within reach of non-experts. At the same time, advanced AI systems can modify and design increasingly sophisticated pathogens. If these trends continue, AI systems may soon enable far more actors to attempt biological attacks capable of causing mass casualties and overwhelming healthcare systems.

The rapid emergence of capable AI agents compounds these risks, increasing the speed and scale of potential cyber and biological misuse.

We assess that dual-use capabilities that were previously concentrated in states—or that required state-level resources to acquire—are fast becoming accessible to non-state actors, from terrorist groups to lone individuals. Currently deployed AI safeguards are far from adequate, and basic technological and societal defences remain embryonic and unevenly deployed across jurisdictions. Coordinated action from frontier AI developers, governments, research institutions, and critical infrastructure entities is necessary if we are to prevent societal-scale harm from AI-enabled attacks.

We recommend states recognise the common threat posed by the proliferation of AI-enabled cyberattack and biological misuse capabilities and commit to prevention efforts. Leading AI jurisdictions have a particular responsibility for coordination. This includes the US and China above all, as well as other jurisdictions with significant AI development, deployment, and evaluation capabilities.

AI-Enabled Cyberattacks Are an Imminent Threat

On current trajectories, non-state actors[1] with minimal capacity will gain access to certain state-level cyberattack capabilities within the next year, creating unprecedented risks to critical infrastructure and national security. Some damage is now likely, as preparation has lagged. But urgent action can still keep harm below societal scale and build the defensive capabilities needed for what comes next.

Frontier AI systems already complete, in hours, coding work that would take expert teams weeks, and can discover and exploit vulnerabilities in the world’s most scrutinised codebases—including every major operating system and browser. Societies are nowhere near prepared for this imminent threat.

The measures below are necessary but not sufficient; they are illustrative steps in a continuous process of defensive improvement.

Priorities for addressing AI-enabled cyberattacks:

  • Protect critical infrastructure. Governments should urgently accelerate work to harden critical systems against AI-enabled attacks, drawing on national cybersecurity agencies, critical-infrastructure operators, and the defensive cybersecurity community.
  • Develop the capacity to evaluate frontier AI systems for cyberattack capabilities. Building on existing efforts, leading AI jurisdictions should develop domestic capacity for such evaluations.
  • Require pre-deployment testing and delay wider availability where warranted. Governments and developers should jointly establish frontier cyber capability thresholds that trigger pre-deployment testing and delayed release, codified through a legal compact rather than voluntary commitments alone. Such delays are not necessarily permanent; they create space for risk evaluation and remediation, and are particularly important for frontier open-weight releases, which are irreversible.
  • Implement access controls on frontier AI systems with advanced cyber capabilities. Developers, supported by governments, should provide early access to validated defenders, enabling remediation before wider deployment. For models with the most advanced cyberattack capabilities, developers should adopt robust access controls, such as identity verification, to detect misuse and restrict access for malicious actors. These measures should be paired with continued investment in misuse monitoring, with appropriate safeguards for user privacy throughout.
  • Establish information sharing and vulnerability disclosure mechanisms. Governments, working with industry, should build mechanisms to share AI-enabled cyber threat indicators and enable collective detection and disruption of misuse. Cross-jurisdictional arrangements among leading AI jurisdictions are essential. Developer coordination with law enforcement and international bodies (e.g., Interpol) to support the identification and deterrence of malicious actors would also be welcome.
  • Invest in long-term resilience. Public investment should accelerate long-term resilience of critical digital infrastructure. No single technical approach is sufficient; sustained progress requires a portfolio of measures, including modernising legacy systems, translating code into memory-safe languages, deploying AI for defence, and applying formal verification.


AI-Enabled Biological Misuse

In the near future, AI may put high-consequence biological capabilities—potentially including the design of pathogens more dangerous than anything found in nature—within reach of actors who cannot access them today. The consequences could be catastrophic for public health and devastating for the economy. These risks must be managed if society is to harness AI's benefits for the life sciences.

Frontier AI capabilities relevant to biological misuse are growing rapidly. In two years, frontier AI systems have gone from scoring well below PhD-level experts to outperforming them on open-ended questions, protocol generation, and laboratory troubleshooting. Controlled studies show they provide substantial uplift in virology protocol generation relative to Internet access alone and, as of mid-2025, can assist non-experts with some steps of laboratory work. Genome language models have already generated novel viable organisms. This risk is further compounded by the emergence of AI agents that can coordinate multi-step laboratory and information-gathering tasks and help build new specialised biological AI models.

Current model-level safeguards—particularly for open-weight models—are fragile and routinely bypassed. Better defences against AI-enabled biological attacks exist, including nucleic acid synthesis screening, personal protective equipment, air filtration, antivirals, and biosurveillance, but remain nascent and unevenly deployed.

Priorities for addressing AI-enabled biological misuse:

  • Harden AI safeguards against high-consequence biological misuse. No single mechanism is sufficient. Developers should invest in a portfolio of model-level safeguards (e.g., pre-training data filtering, refusal training) and system-level safeguards (e.g., trusted access programmes, misuse monitoring via classifiers). Governments should fund sustained safety research to strengthen these safeguards over time and enable information-sharing of best practices between companies. Until broader defences mature, high-consequence AI biological capabilities and datasets should be limited to validated users, such as researchers at vetted institutions.
  • For frontier proprietary models: strengthen refusal training, identity-verified access controls, and targeted usage monitoring against high-consequence biological misuse. Developers should continue to harden safeguards against jailbreaking and other adversarial attacks, build out trusted-access programmes, and improve classifier performance, especially in distinguishing legitimate from malicious use.
  • For frontier open-weight models: research and institute pre-training data filtering for high-consequence biological knowledge. Pre-training data filtering is one of the few potential mitigations for open-weight releases. While not a durable standalone solution, it may meaningfully slow proliferation while stronger safeguards are developed. Developers and researchers should continue to improve filtering methods, assess their effectiveness at production scale, release open-source implementations, and pursue further research into open-weight model safeguards. Governments should help secure the most concerning dual-use pathogen datasets against malicious training and fine-tuning.
  • Develop the capacity to evaluate frontier AI systems for biological uplift. Leading AI jurisdictions should build domestic capacity to evaluate whether frontier systems provide meaningful uplift for biological misuse by both experts and non-experts. Tiered oversight mechanisms analogous to biosafety-level (BSL) frameworks—with relevant authorities receiving early access to evaluate systems with potentially high-consequence biological uplift—should be developed, so consequential judgements are not left to developers alone.
  • Coordinate internationally on nucleic acid synthesis screening. Governments and relevant expert communities with synthesis capabilities should coordinate on comprehensive and mandatory nucleic acid synthesis screening as an immediate priority, building on existing national regulatory frameworks.
  • Accelerate defence against AI-enabled biological attacks. Governments should invest in the capabilities needed to detect, contain, and respond to AI-enabled biological threats, with particular attention to regions where gaps in biosecurity capacity create shared vulnerabilities. Such research should be structured to avoid advancing offensive capabilities as a byproduct.

The safeguards, controls, and societal defences set out above are necessary and urgent. A serious AI-enabled catastrophe would not just cause enormous direct harm, but would also be catastrophic for public trust in AI systems, squandering AI’s substantial societal benefits. The Chernobyl disaster decimated the global nuclear industry and still casts a shadow over civilian nuclear power despite vastly safer modern designs. With trillions of dollars and millions of lives at stake, we will need immediate action and sustained investment from governments, industry, and the research community.

AI-enabled cyberattacks are already upon us, and significant damage is likely in the coming year. Biological misuse is not far behind. We have warned of exactly this dynamic: AI capabilities are outpacing the world’s ability to prepare. What is now unfolding in cyber should serve as a warning for biological risks and for the broader set of risks ahead, including loss of control over increasingly autonomous AI systems. Preparing before capabilities emerge, rather than after, must now be the default posture.


[1] We use “non-state actors” to include individuals, criminal groups, and other non-governmental entities with intent and capacity to misuse AI capabilities.


Signatories

Andrew Yao 姚期智

Turing Award Winner

Dean
Shanghai Qi Zhi Institute

Dean, Institute for Interdisciplinary Information Sciences and College of AI
Tsinghua University

Yoshua Bengio

Professor
Université de Montréal
Founder and Scientific Advisor
Mila – Quebec AI Institute
Chair
International Scientific Report on the Safety of Advanced AI
Turing Award Winner

Yaqin Zhang 张亚勤

Chair Professor of AI Science and Dean
Institute for AI Industry Research (AIR), Tsinghua University

Former President
Baidu

Stuart Russell

Professor and Smith-Zadeh Chair in Engineering
University of California, Berkeley
Founder of Center for Human-Compatible Artificial Intelligence (CHAI)
University of California, Berkeley

Craig Mundie

President
Mundie & Associates

Fu Ying 傅莹

Xue Lan 薛澜

Dean, Schwarzman College
Tsinghua University
Director, Institute for AI International Governance (I-AIIG)
Tsinghua University

Max Tegmark

Professor, Center for Brains, Minds and Machines (CBMM)
Massachusetts Institute of Technology (MIT)
President and Co-founder
Future of Life Institute

Robert Trager

Director, Oxford Martin AI Governance Initiative
University of Oxford

Gillian K. Hadfield

Bloomberg Distinguished Professor of AI Alignment and Governance
Johns Hopkins University

Hu Xia 胡侠

Lead Scientist
Shanghai AI Laboratory

Jonathan Barry

Director of Policy
Mila – Quebec Artificial Intelligence Institute

Xu Wei 徐葳

Principal Investigator
Shanghai Qi Zhi Institute

Professor and Vice Dean of the Institute for Interdisciplinary Information Sciences
Tsinghua University

Xiao Qian 肖茜

Vice Dean
Institute of AI International Governance, Tsinghua University

Dave Orr

Head of Safeguards
Anthropic

Benjamin Prud’homme

Adam Gleave

Founder and CEO
FAR.AI

Dong Yinpeng 董胤蓬

Assistant Professor
College of AI, Tsinghua University

Nouha Dziri

Senior Research Scientist
Cohere

Lu Chaochao 陆超超

Research Scientist
Shanghai AI Laboratory

Seth Donoughe

Director of AI
SecureBio

Brian Tse 谢旻希

Founder and CEO
Concordia AI

Malcolm Murray

Research Lead
SaferAI

Fynn Heide

Executive Director
Safe AI Forum