RICHARD VAN NOORDEN: Machine-learning software trained on masses of chemical-safety data is so good at predicting some kinds of toxicity that it now rivals — and sometimes outperforms — expensive animal studies, researchers report. Computer models could replace some standard safety studies conducted on millions of animals each year, such as dropping compounds into rabbits’ eyes to check if they are irritants, or feeding chemicals to rats to work out lethal doses, says Thomas Hartung, a toxicologist at Johns Hopkins University in Baltimore, Maryland. “The power of big data means we can produce a tool more predictive than many animal tests.”
In a paper published in Toxicological Sciences, Hartung’s team reports that its algorithm can accurately predict toxicity for tens of thousands of chemicals — a range much broader than other published models achieve — across nine kinds of test, from inhalation damage to harm to aquatic ecosystems… Bennard van Ravenzwaay, a toxicologist at the chemicals firm BASF in Ludwigshafen, Germany, said, “I am 100% convinced this will be a pillar of toxicology in the future.” Still, it could be many years before government regulators accept computer results in place of animal studies.
Industry and academia have used computer models for decades to predict toxicity. These models typically incorporate a molecule’s chemical structure, an understanding of how it might react in the body and data from animal tests or in vitro studies. Companies also infer the toxic effects of untested substances by comparing them with other structurally or biologically similar compounds whose effects are known — a method known as ‘read-across’. But regulators set a high bar for accepting these methods and tend to ask for animal studies instead, Hartung and other toxicologists say.
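The read-across idea described above — scoring an untested compound against tested neighbours by structural similarity and borrowing their known labels — can be sketched in a few lines. This is a minimal toy illustration, not the method used in Hartung’s study: the fingerprints, compound names and toxicity labels below are invented for the example, and real read-across relies on curated chemical fingerprints and expert-reviewed safety data.

```python
def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints,
    represented here as sets of 'on' feature bits."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def read_across(query, known, k=3):
    """Predict toxicity for an untested compound by majority vote
    among its k most structurally similar tested neighbours."""
    ranked = sorted(known, key=lambda c: tanimoto(query, c["bits"]),
                    reverse=True)
    top = ranked[:k]
    toxic_votes = sum(1 for c in top if c["toxic"])
    return toxic_votes > k / 2, top

# Hypothetical compound library: each entry is a set of structural-
# feature bits plus a known toxicity label (all values made up).
known = [
    {"name": "A", "bits": {1, 2, 3, 5}, "toxic": True},
    {"name": "B", "bits": {1, 2, 3, 8}, "toxic": True},
    {"name": "C", "bits": {4, 6, 7, 9}, "toxic": False},
    {"name": "D", "bits": {2, 4, 6, 7}, "toxic": False},
]

query = {1, 2, 3, 9}  # untested compound's fingerprint
is_toxic, neighbours = read_across(query, known)
print(is_toxic, [c["name"] for c in neighbours])
```

Because the query shares most of its feature bits with the two toxic compounds A and B, the majority vote flags it as likely toxic — the same inference a read-across assessment makes, at toy scale.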