Artificial intelligence designs 40,000 chemical weapons in just 6 hours

The pharmaceutical company Collaborations Pharmaceuticals, Inc. repurposed a drug discovery AI to identify 40,000 new potential chemical weapons in just 6 hours.

In today’s society, artificial intelligence (AI) is mostly used for good. But what if it weren’t?

This is the question researchers at Collaborations Pharmaceuticals asked themselves when conducting experiments using an AI that was built to search for useful drugs.

So they adjusted this AI to search for chemical weapons instead, and alarmingly, the machine learning algorithm found 40,000 candidates in just six hours, according to a paper published this month in the journal Nature Machine Intelligence.

“Naive” thinking

“The idea had never occurred to us before. We were vaguely aware of safety concerns around working with pathogens or toxic chemicals, but that didn’t relate to us; we mostly operate in a virtual environment. Our work is based on creating machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery,” the researchers wrote in their paper.

“We spent decades using computers and artificial intelligence to improve human health, not degrade it. We were naive to think about the possible misuse of our craft, as our goal had always been to avoid molecular features that could interfere with the many different classes of essential proteins for human life.”

The researchers said that even their work on Ebola and neurotoxins, which might have raised concerns about possible negative implications of their machine learning models, had not raised alarm bells. They were blissfully unaware of the damage they could inflict.

How did the experiment work?

Collaborations Pharmaceuticals had published machine learning computational models for toxicity prediction. All the researchers had to do was adapt their methodology to seek out, rather than rule out, toxicity; what began as a thought exercise turned into a computational proof of concept for designing biochemical weapons.
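The core idea of "seeking out rather than ruling out" toxicity amounts to flipping the sign of a toxicity term in a scoring objective. The actual models and molecular representations used in the paper are not public, so the sketch below is purely illustrative: the predictor functions and feature dicts are hypothetical stand-ins, not the authors' method.

```python
# Illustrative sketch only: predicted_activity and predicted_toxicity are
# hypothetical stand-ins for the machine learning models described in the
# paper, and molecules are mocked as simple feature dicts.

def predicted_activity(mol):
    """Stand-in for an ML model predicting bioactivity against a target."""
    return mol["activity"]

def predicted_toxicity(mol):
    """Stand-in for an ML model predicting toxicity."""
    return mol["toxicity"]

def score(mol, penalize_toxicity=True):
    """Drug discovery normally *penalizes* predicted toxicity;
    flipping the sign rewards it instead."""
    sign = -1.0 if penalize_toxicity else 1.0
    return predicted_activity(mol) + sign * predicted_toxicity(mol)

candidates = [
    {"name": "A", "activity": 0.9, "toxicity": 0.1},
    {"name": "B", "activity": 0.8, "toxicity": 0.9},
]

# The normal drug-discovery objective prefers the safer molecule...
best_safe = max(candidates, key=lambda m: score(m, penalize_toxicity=True))
# ...while the inverted objective prefers the more toxic one.
best_toxic = max(candidates, key=lambda m: score(m, penalize_toxicity=False))
```

With the example values above, the normal objective selects molecule A and the inverted objective selects molecule B; the only change between the two runs is the sign on the toxicity term.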

The experiment is a clear indication of why we need to monitor AI models more closely and really think about the consequences of our work.