Weekend Reads: Discovering New Medicines with AI

by Kevin Schofield


This weekend’s reading is an essay examining an often overlooked aspect of the tools pharmaceutical researchers use to discover potential new drugs.

Artificial intelligence, or AI, is being integrated into more and more business processes. You can roughly think of AI, at least in its current form, as software that can take an existing data set on a particular topic, learn from it, and then extrapolate to new scenarios. Computers playing chess are trained using past games. Computer vision systems receive photos that are tagged with the items in the photos, and the systems learn to recognize other examples of those items.

In the world of drug discovery, AI systems train on examples of molecules that have been shown to have therapeutic value, as well as those that are toxic. The tools then extrapolate new potential molecules that present the therapeutic aspects while avoiding the toxic aspects. These drug candidates can then be further screened, synthesized and tested.

A group of researchers from Collaborations Pharmaceuticals in North Carolina was invited by the Swiss Federal Institute for Nuclear, Biological and Chemical Protection to present at the institute's biennial conference on how AI technologies for drug discovery could potentially be misused. Although it wasn't something they had spent much time thinking about, in preparation for the presentation they took their own "molecule generator" AI software and reversed its settings: they asked it to optimize for toxicity rather than therapeutic value.
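The core of that reversal is simpler than it might sound. A generative drug-discovery pipeline scores candidate molecules against an objective and keeps the best performers; flip the sign on the toxicity term and the same machinery hunts for poisons instead of medicines. The toy sketch below is purely illustrative (it is not the researchers' actual software, and the names, scores, and weight are invented for the example):

```python
# Toy illustration of a dual-use scoring objective (all values are made up):
# candidates are scored as therapeutic_value - weight * toxicity, and the
# search keeps the top scorer. Reversing the sign of the toxicity weight
# is all it takes to invert the goal of the search.

def score(candidate, toxicity_weight):
    """Score a candidate: reward therapeutic value; penalize (or, with a
    negative weight, reward) toxicity."""
    return candidate["therapeutic"] - toxicity_weight * candidate["toxicity"]

def top_candidate(candidates, toxicity_weight):
    """Return the name of the best-scoring candidate under the objective."""
    return max(candidates, key=lambda c: score(c, toxicity_weight))["name"]

candidates = [
    {"name": "drug-like",  "therapeutic": 0.8, "toxicity": 0.1},
    {"name": "toxin-like", "therapeutic": 0.2, "toxicity": 0.9},
]

# Normal setting: toxicity is penalized, so the drug-like candidate wins.
print(top_candidate(candidates, toxicity_weight=1.0))   # drug-like
# Reversed setting: toxicity is rewarded, so the toxin-like candidate wins.
print(top_candidate(candidates, toxicity_weight=-1.0))  # toxin-like
```

The unsettling point the researchers make is that nothing else in the pipeline needs to change; the generator, the training data, and the screening loop are identical in both directions.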

In less than six hours, the software had designed 40,000 potentially toxic molecules. Among these candidates were the nerve agent VX, one of the most toxic chemical warfare agents of the 20th century, and several other known chemical warfare agents. The software generated them from scratch; none of these agents were in the dataset used to train the AI.

“Without being overly alarmist, this should serve as a wake-up call to our colleagues in the ‘AI in Drug Discovery’ community,” the researchers wrote. Their molecule generator was based on open source tools readily available and widely used in the pharmaceutical community; while these researchers claim to have destroyed the results of their thought experiment, it would be easy for bad actors to use the same tools and public toxicity datasets to repeat their experiment and design their own new and toxic chemicals.

As the researchers point out, much of our public debate about the potential misuse of AI concerns privacy, discrimination, and security, but not national and international security issues such as the development of new chemical and biological weapons. But the hard part here is that the tools are the same, whether they're used to generate life-saving new drugs or new chemical warfare agents. How do you control the use of these tools so that you have one without the other?

In many ways, however, this is a constantly revisited theme for technological advancements. Microsoft Office and Google Docs can be used to draft both hate-filled screeds and hate crime legislation. Accounting software fuels the scrappy start-up on the street as well as crime syndicates. AI-powered computer vision software can scan video footage for wanted criminals, and it can also monitor innocent citizens. 3D printers can make useful tools, as well as unregistered, untraceable, and often undetectable weapons.

The most common defense we hear from the tech industry is that technology is "amoral": technology itself is neither good nor bad, while the uses of that technology, and the people who use it in those ways, are what we need to control. Easier said than done, however; in most cases where the technology is freely available, we only find out about abuse after the damage is done.

The researchers suggest a few ideas on how to reduce some of the worst potential outcomes of the dual nature of AI-based tools. One is to accelerate the development of ethical guidelines for these emerging areas of concern, followed by enhanced training for professionals on ethical boundaries. Another is to place the AI software and accompanying data models "in the cloud," where access can be regulated, rather than allowing them to be downloaded to private machines where unknown actors can use them for malicious purposes.

But in truth, as the researchers put it, "the genie is out of the medicine bottle." The tools that already exist are enough to help someone with bad intentions cause a lot of harm. Add this to the list of challenges in a technologically advancing society: how to empower people to do good while preventing them from doing bad.

Dual Use of Artificial-Intelligence-Powered Drug Discovery


Kevin Schofield is a freelance writer and the founder of Seattle City Council Overview, a website providing independent information and analysis about the Seattle City Council and City Hall. He also co-hosts the “Seattle News, Views and Brews” podcast with Brian Callanan, and appears occasionally on Converge Media and KUOW’s Week in Review.

📸 Featured Image: Image by PopTika/Shutterstock.com

