AIxBio
Artificial Intelligence and Biotechnology
The convergence of artificial intelligence and biotechnology (AIxBio) has the potential to yield extraordinary benefits as well as serious risks. Highly capable large language models (LLMs) and biological design tools (BDTs) trained on biological data are two of the most important classes of emerging AI models.
AI technologies could revolutionize how healthcare is organized and delivered, how medicines and vaccines are developed, how diseases are diagnosed, and the speed with which new outbreaks are detected. New AI models could also increase the risk of high-consequence accidents or misuse of biology and the life sciences. Two potential harms are extraordinarily important to govern: AI models or tools that, currently or in the near- to mid-term future, either on their own or when paired with other emerging or existing capabilities, could:
- Greatly accelerate or simplify the reintroduction of dangerous extinct viruses, or of dangerous viruses that now exist only within research labs, with the capacity to start pandemics, panzootics, or panphytotics; or
- Substantially enable, accelerate, or simplify the creation of novel variants of pathogens or entirely novel biological constructs that could start pandemics, panzootics, or panphytotics.
These are not the only potential AI-enabled biological harms that should be governed, but governance efforts should address them at a minimum.
Balancing the potential benefits of AI in public health, biosecurity, and healthcare against the narrow subset of high-consequence dual-use risks is crucial. By conducting research and providing evidence-based recommendations, the Center for Health Security hopes to help policymakers and other stakeholders address these challenges proactively, enabling the world to harness the full potential of AI in biotechnology while safeguarding against potential threats to health security.
Our Work on AIxBio
Policy Documents
December 3, 2024: Response to AISI's RFI on Safety Considerations for Chemical and/or Biological AI Models
November 11, 2024: Response to DOE RFI on the Frontiers in AI for Science, Security, and Technology (FASST) Initiative
November 6, 2024: Summary of Biosecurity Provisions in the October 2024 National Security Memorandum (NSM) on Artificial Intelligence (AI)
September 9, 2024: Response to NIST AI 800-1, Managing Misuse Risk for Dual-Use Foundation Models
July 10, 2024: Summary of Paper: Prioritizing High-Consequence Biological Capabilities in Evaluations of AI Models (2 pages)
June 2, 2024: Response to NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile Request for Comment
April 9, 2024: Response to the NSCEB’s Interim Report and AIxBio Policy Options
March 27, 2024: Response to NTIA RFC on Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights
November 8, 2023: CHS Director Testimony: Avoiding a Cautionary Tale: Policy Considerations for Artificial Intelligence in Health Care
Newsroom
April 9, 2024: Leaders in biosecurity commend NSCEB Interim Report and AIxBio policy options paper, make additional recommendations
February 6, 2024: Johns Hopkins Center for Health Security responds to NIST RFI on Implementing the Artificial Intelligence Executive Order to Guard Against High-Consequence Bio Risks
December 19, 2023: Johns Hopkins Center for Health Security publishes key takeaways from its meeting on the convergence of AI and biotechnology
Publications & Preprints
November 21, 2024: AI could pose pandemic-scale biosecurity risks. Here’s how to make it safer
August 23, 2024: AI and biosecurity: The need for governance
August 6, 2024: AIxBio: Opportunities to Strengthen Health Security
June 25, 2024: Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models (preprint)
Helpful Links
- NSM Framework to Advance AI Governance and Risk Management in National Security, October 24, 2024
- National Security Memorandum (NSM) on AI, October 24, 2024
- UN AI Advisory Body Report, September 2024
- EU AI Act, July 2024
- NTIA Dual-Use Foundation Models with Widely Available Model Weights Report, July 30, 2024
- NIST Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, July 26, 2024
- Seoul Declaration, May 21, 2024
- UK Government's Interim Report, May 17, 2024
- Senate Roadmap for AI Policy and One-Pager, May 15, 2024
- OSTP Framework, April 29, 2024
- DHS Report on Reducing the Risks at the Intersection of AI and CBRN, Public Release June 20, 2024, and DHS Fact Sheet, April 29, 2024
- NSCEB AIxBio White Paper #4, January 2024
- Bletchley Declaration, November 1, 2023
- Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, October 30, 2023
- White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, October 30, 2023