
AI could pose pandemic-scale biosecurity risks

Since July, researchers at Los Alamos National Laboratory in New Mexico have been assessing how the artificial intelligence (AI) model GPT-4o could assist humans with tasks in biological research. In the assessments — which are being conducted to advance innovations in bioscience as well as to understand potential risks — humans ask GPT-4o various questions to help them carry out standard experimental tasks.

These include maintaining and propagating cells in vitro; separating cells and other components in a sample using a centrifuge; and introducing foreign genetic material into a host organism.

In these assessments, the Los Alamos researchers are collaborating with OpenAI, the San Francisco, California-based company that developed GPT-4o. These tests are among the few efforts aimed at addressing potential biosafety and biosecurity issues posed by AI models since OpenAI made ChatGPT, a chatbot based on a large language model (LLM), publicly available in November 2022.

We argue that much more needs to be done.

Three of us at the Johns Hopkins Center for Health Security in Baltimore, Maryland, investigate how scientific and technological innovations can affect public health and health security. Two of us research and develop solutions to public-policy challenges at the nonprofit think tank RAND, headquartered in Santa Monica, California.

Although we see the promise of AI-assisted biological research to improve human health and well-being, the technology is unpredictable and could pose significant risks. We urge governments to move quickly to clarify which risks warrant the most attention, and to determine what testing and mitigation measures would be adequate. In short, we call for a more deliberate approach that builds on decades of governmental and scientific experience in mitigating pandemic-scale risks in biological research1.

Rapid experimentation

GPT-4o is a 'multimodal' LLM. It can accept text, audio, image and video prompts, and it has been trained on vast quantities of data in these formats scraped from the internet and elsewhere – data that almost certainly includes millions of peer-reviewed studies in biological research. Its capabilities are still being tested, but previous work indicates its potential uses in the life sciences.

For example, in 2023, Microsoft (a major investor in OpenAI) published an evaluation of GPT-4, the predecessor of GPT-4o, showing that the LLM could provide step-by-step instructions for using the protein-design tool Rosetta to design antibodies capable of binding to the spike protein of the coronavirus SARS-CoV-2.

It can also translate an experimental protocol into code for a robot that can handle liquids – a capability that is “expected to greatly accelerate the automation of biology experiments”2. Also in 2023, researchers at Carnegie Mellon University in Pittsburgh, Pennsylvania, showed that a system using GPT-4, called CoScientist, could design, plan and execute complex experiments such as chemical synthesis.

In this case, the system was able to search documents, write code and control a robotic lab device3. And earlier this month, researchers at Stanford University in California and the Chan Zuckerberg Biohub in San Francisco launched a virtual lab — a team of LLM agents powered by GPT-4o that designed potent SARS-CoV-2 nanobodies (a type of antibody) with minimal human input4.
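To make the protocol-to-robot-code step concrete, here is a minimal sketch of how such a translation might be wired up in Python. The prompt, the model name and the choice of the Opentrons Python protocol API as the target format are illustrative assumptions; this is not the pipeline used in any of the studies cited above.

# Minimal sketch (illustrative assumptions): ask an LLM to translate a
# written protocol step into liquid-handling robot code.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

protocol_step = (
    "Transfer 50 microlitres of cell suspension from well A1 of the source "
    "plate into each of wells B1 to B6 of the destination plate."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Translate wet-lab protocol steps into an Opentrons "
                    "Python protocol. Return only runnable code."},
        {"role": "user", "content": protocol_step},
    ],
)

robot_code = response.choices[0].message.content
print(robot_code)  # reviewed by a human before it is ever sent to hardware

The sketch also illustrates the governance question: once a model can emit code that drives laboratory hardware, the human review implied in the final line is exactly the kind of safeguard whose adequacy needs to be tested.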

OpenAI released GPT-4o in May, and is expected to release its successor, GPT-5, in the coming months. Most other major AI companies have similarly improved their models.

So far, assessments have focused primarily on individual LLMs.

But AI developers hope that combinations of AI tools, including LLMs, robotics and automation technologies, will enable experiments such as the manipulation, design and synthesis of drug candidates, toxins or stretches of DNA with minimal human involvement.

These advances promise to transform biomedical research. But they could also bring significant biosafety and biosecurity risks5. Indeed, many governments around the world have taken steps to try to mitigate such risks from cutting-edge AI models (see 'Racing to Keep Up'). For example, in 2023, the US government obtained voluntary commitments from 15 major AI companies to manage the risks posed by the technology.

Later that year, US President Joe Biden signed an executive order on the safe, secure and trustworthy development and use of artificial intelligence. Among other things, it requires companies to notify the government before releasing models that are trained “primarily on biological sequence data” and that use “an amount of computing power greater than 10²³ integer or floating-point operations.”
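To put that compute threshold in context, a common back-of-the-envelope approximation is that training compute comes to roughly six floating-point operations per model parameter per training token. The sketch below applies that rule of thumb to hypothetical model and dataset sizes; the numbers are illustrative and do not describe any real system.

# Back-of-the-envelope check against the executive order's 1e23-operation
# reporting threshold for models trained primarily on biological sequence data.
# Approximation: training FLOPs ~ 6 x parameters x training tokens.
# The parameter and token counts below are hypothetical.
THRESHOLD_FLOPS = 1e23

def training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough estimate of total training compute, in floating-point operations."""
    return 6 * n_parameters * n_tokens

flops = training_flops(n_parameters=10e9, n_tokens=1.7e12)  # 10 billion parameters, 1.7 trillion tokens
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above reporting threshold" if flops > THRESHOLD_FLOPS else "Below reporting threshold")

On these illustrative numbers, a biological-sequence model of modest size by current standards would already cross the reporting line.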
