OpenAI is partnering with Los Alamos National Laboratory to study how artificial intelligence can be used to defend against biological threats that could be created by non-experts using AI tools, according to announcements Wednesday by both organizations. The Los Alamos lab, established in New Mexico during World War II to develop the atomic bomb, called the effort a “first of its kind” study on AI biosecurity and the ways that AI can be used in a lab setting.

The difference between the two statements released Wednesday by OpenAI and the Los Alamos lab is striking. OpenAI’s statement frames the partnership as simply a study on how AI “can be used safely by scientists in laboratory settings to advance bioscientific research.” The Los Alamos lab, by contrast, puts much more emphasis on the fact that previous research “found that ChatGPT-4 provided a mild uplift in providing information that could lead to the creation of biological threats.”

Much of the public discussion around threats posed by AI has centered on the creation of a self-aware entity that could conceivably develop a mind of its own and harm humanity in some way. Some worry that achieving AGI (artificial general intelligence, where the AI can perform advanced reasoning and logic rather than acting as a fancy auto-complete word generator) may lead to a Skynet-style situation. And while many AI boosters like Elon Musk and OpenAI CEO Sam Altman have leaned into this characterization, it appears the more urgent threat to address is making sure people don’t use tools like ChatGPT to create bioweapons.

“AI-enabled biological threats could pose a significant risk, but existing work has not assessed how multimodal, frontier models could lower the barrier of entry for non-experts to create a biological threat,” Los Alamos lab said in a statement published on its website.

The difference in how the two organizations positioned their messages likely comes down to the fact that OpenAI may be uncomfortable acknowledging the national security implications of the partnership, namely that its product could be used by terrorists. To put an even finer point on it, the Los Alamos statement uses the terms “threat” or “threats” five times, while the OpenAI statement uses them just once.