AIxBio

Artificial Intelligence and Biotechnology

The convergence of artificial intelligence and biotechnology (AIxBio) has the potential to yield extraordinary benefits as well as serious risks. Highly capable large language models (LLMs) and biological design tools (BDTs) trained on biological data are two of the more important classes of emerging AI models.

AI technologies could revolutionize how healthcare is organized and delivered, how medicines and vaccines are developed, how diseases are diagnosed, and the speed with which new outbreaks are detected. New AI models could also increase the risk of high-consequence accidents or misuse of biology and the life sciences. Two potential harms that are extraordinarily important to govern are AI models or tools that, currently or in the near to mid-term future, either on their own or when paired with other emerging or existing capabilities, could:

  1. Greatly accelerate or simplify the reintroduction of dangerous extinct viruses, or of dangerous viruses that now exist only in research laboratories, with the capacity to start pandemics, panzootics, or panphytotics; or
  2. Substantially enable, accelerate, or simplify the creation of novel variants of pathogens or entirely novel biological constructs that could start pandemics, panzootics, or panphytotics.

These are not the only potential AI-enabled biological harms that should be governed, but governance efforts should address them at a minimum.

Balancing the potential benefits of AI in public health, biosecurity, and healthcare against the narrow subset of high-consequence dual-use risks is crucial. By conducting research and providing evidence-based recommendations, the Center for Health Security hopes to help policymakers and other stakeholders address these challenges proactively, enabling the world to harness the full potential of AI in biotechnology while safeguarding against potential threats to health security.

Our Work on AIxBio