Optimization of Molecules via Deep Reinforcement Learning


Bibliographic Details
Published in: Scientific Reports, Vol. 9, no. 1, p. 10752
Main Authors: Zhou, Zhenpeng; Kearnes, Steven; Li, Li; Zare, Richard N.; Riley, Patrick
Format: Journal Article
Language: English
Published: England: Nature Publishing Group UK, 24.07.2019

Summary: We present a framework, which we call Molecule Deep Q-Networks (MolDQN), for molecule optimization that combines domain knowledge of chemistry with state-of-the-art reinforcement learning techniques (double Q-learning and randomized value functions). We directly define modifications on molecules, thereby ensuring 100% chemical validity. Further, we operate without pre-training on any dataset, avoiding possible bias from the choice of that set. MolDQN achieves comparable or better performance than several other recently published algorithms on benchmark molecular optimization tasks. However, we also argue that many of these tasks are not representative of real optimization problems in drug discovery. Inspired by problems faced during medicinal chemistry lead optimization, we extend our model with multi-objective reinforcement learning, which maximizes drug-likeness while maintaining similarity to the original molecule. Finally, we trace the path through chemical space taken while optimizing a molecule to illustrate how the model works.
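The double Q-learning component mentioned in the summary can be sketched in tabular form. This is a minimal illustrative sketch only, not the paper's method: MolDQN uses deep networks rather than lookup tables, and the molecule states and actions here are hypothetical placeholder names. The key idea shown is that one value estimate selects the greedy next action while the other evaluates it, which reduces overestimation bias.

```python
import random

def double_q_update(Q1, Q2, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One tabular double Q-learning step.

    Q1, Q2: dicts mapping (state, action) -> value estimate.
    On each step, a coin flip decides which table is updated; the greedy
    next action is chosen by the updated table but valued by the other.
    """
    # Randomly choose which table to update this step.
    if random.random() < 0.5:
        Q_upd, Q_eval = Q1, Q2
    else:
        Q_upd, Q_eval = Q2, Q1
    # Greedy next action according to the table being updated ...
    a_star = max(actions, key=lambda x: Q_upd.get((s_next, x), 0.0))
    # ... but its value is taken from the other table.
    target = r + gamma * Q_eval.get((s_next, a_star), 0.0)
    old = Q_upd.get((s, a), 0.0)
    Q_upd[(s, a)] = old + alpha * (target - old)

# Hypothetical usage: a molecule-editing step ("add a carbon atom")
# that received a reward of 1.0 (e.g. an improvement in drug-likeness).
Q1, Q2 = {}, {}
double_q_update(Q1, Q2, "mol_start", "add_C", 1.0, "mol_next",
                actions=["add_C", "remove_atom", "no_op"])
```

Decoupling selection from evaluation in this way is what distinguishes double Q-learning from standard Q-learning, where the same estimator both picks and scores the greedy action and therefore tends to overestimate values.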
ISSN: 2045-2322
DOI: 10.1038/s41598-019-47148-x