Tentative: Fine-tuned generation of molecules for materials discovery with generative models.
In molecular discovery, we often seek to generate chemical species tailored to very specific needs. To this end, different criteria, often encoded as heuristics, are applied; typical examples include synthesizability, symmetry, and stability under normal conditions, as well as diversity and the sometimes vague notion of ‘performance’.
Here, we present Objective-Reinforced Generative Adversarial Networks (ORGAN), a method that combines two well-established machine learning techniques, Generative Adversarial Networks (GANs) and reinforcement learning (RL), to accomplish this goal. While RL biases the molecular generation process towards arbitrary metrics, the GAN component of the reward function ensures that the model still retains information learned from previously observed data.
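The interplay between the two reward signals can be made concrete with a minimal sketch. The code below is illustrative, not the paper's implementation: it assumes the discriminator score and the domain objective have both been rescaled to [0, 1], and it uses a mixing weight `lam` (a hypothetical parameter name) to interpolate between them; the convention for which term the weight multiplies varies across presentations.

```python
def organ_reward(disc_score, obj_score, lam=0.5):
    """Sketch of an ORGAN-style combined reward for one generated sequence.

    disc_score: GAN discriminator's probability that the sequence looks like
                the training data (assumed rescaled to [0, 1]).
    obj_score:  domain objective for the sequence, e.g. a normalized
                drug-likeness or solubility metric (assumed in [0, 1]).
    lam:        mixing weight; lam = 1 keeps only the adversarial signal,
                lam = 0 optimizes the external objective alone.
    """
    return lam * disc_score + (1.0 - lam) * obj_score


# Example: a sequence the discriminator likes (0.8) but that scores
# poorly on the objective (0.4) receives an intermediate reward.
reward = organ_reward(disc_score=0.8, obj_score=0.4, lam=0.5)
```

In an RL loop, this scalar would serve as the return for the policy-gradient update of the generator, so raising `lam` pulls samples toward the data distribution while lowering it pulls them toward the metric being optimized.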
We build upon previous results that incorporated GANs and RL to generate sequence data, and test this model in several settings for the generation of molecules encoded as text sequences (SMILES). We explore several applications that would benefit from this approach, including drug design and organic photovoltaic materials discovery. For each case we show that we can effectively bias the generation process towards the desired properties.