3260 papers • 126 benchmarks • 313 datasets
Drug discovery is the task of applying machine learning to discover new candidate drugs. (Image credit: A Turing Test for Molecular Generators)
These leaderboards are used to track progress in drug discovery
Use these libraries to find drug discovery models and implementations
A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs; it outperforms related methods by a significant margin.
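As a concrete illustration, here is a minimal sketch of the normalized propagation rule such graph convolutions use, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W); the toy graph, feature sizes, and weight matrices are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of one graph convolution layer; names follow the
# usual formulation, but the toy graph and shapes are assumptions.
import numpy as np

def gcn_layer(A, H, W):
    """One propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(d ** -0.5)           # symmetric normalization
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
H = np.random.randn(3, 4)                     # node features
W = np.random.randn(4, 2)                     # learnable weights
print(gcn_layer(A, H, W).shape)               # (3, 2)
```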
Using MPNNs, state-of-the-art results are demonstrated on an important molecular property prediction benchmark; the authors suggest that future work should focus on datasets with larger molecules or more accurate ground-truth labels.
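A hedged sketch of a single message-passing step in this framework follows; the specific message and update functions (a linear message and tanh update) are simple illustrative choices, not the paper's exact variants.

```python
# Toy message-passing step: each node aggregates transformed
# neighbour states, then updates its own state.
import numpy as np

def mpnn_step(h, edges, W_msg, W_upd):
    """h: (n, d) node states; edges: list of (src, dst) pairs."""
    m = np.zeros_like(h)
    for src, dst in edges:                    # aggregate messages
        m[dst] += h[src] @ W_msg
    return np.tanh(h @ W_upd + m)             # update node states

h = np.random.randn(4, 8)                     # 4 atoms, 8-dim states
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
W_msg = np.random.randn(8, 8)
W_upd = np.random.randn(8, 8)
h = mpnn_step(h, edges, W_msg, W_upd)
```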
Self-normalizing neural networks (SNNs) are introduced to enable high-level abstract representations, and it is proved that activations propagated through many network layers converge towards zero mean and unit variance -- even in the presence of noise and perturbations.
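The SELU activation at the core of SNNs is simple to state; the alpha and lambda constants below are the fixed values derived in the paper.

```python
# SELU activation; the constants are the fixed values from the paper.
import numpy as np

ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    """Pushes activations toward zero mean and unit variance."""
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

x = np.random.randn(10000)
y = selu(x)
print(y.mean(), y.var())   # close to 0 and 1 for standardized input
```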
This work studies feature learning techniques for graph-structured inputs and achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.
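The recurrent update typically used for such graph-structured feature learning can be sketched as a GRU-style cell over aggregated neighbour states; the shapes and the single shared weight per gate here are simplifying assumptions.

```python
# Toy gated graph update: neighbour messages feed a GRU-style cell.
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def gated_step(h, A, W_a, W_z, W_r, W_h):
    a = A @ h @ W_a                      # aggregate neighbour states
    z = sigmoid(a + h @ W_z)             # update gate
    r = sigmoid(a + h @ W_r)             # reset gate
    h_tilde = np.tanh(a + (r * h) @ W_h) # candidate state
    return (1 - z) * h + z * h_tilde

n, d = 5, 6
h = np.random.randn(n, d)
A = (np.random.rand(n, n) > 0.6).astype(float)
W_a, W_z, W_r, W_h = (np.random.randn(d, d) for _ in range(4))
h = gated_step(h, A, W_a, W_z, W_r, W_h)
```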
The junction tree variational autoencoder generates molecular graphs in two phases, first generating a tree-structured scaffold over chemical substructures and then combining them into a molecule with a graph message passing network, which allows the model to incrementally expand molecules while maintaining chemical validity at every step.
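A heavily simplified structural sketch of this two-phase decoding follows; the toy cluster vocabulary, tree sampler, and string-concatenation "assembly" are placeholders for the paper's learned networks and chemistry-aware attachment scoring.

```python
# Structural sketch of two-phase decoding; everything here is a toy
# stand-in for the learned tree decoder and graph assembly network.
import random

VOCAB = ["C", "CC", "C1CCCCC1", "c1ccccc1", "C(=O)O"]  # toy clusters

def sample_tree(n_nodes=4):
    """Phase 1: tree-structured scaffold over chemical substructures."""
    nodes = [random.choice(VOCAB) for _ in range(n_nodes)]
    edges = [(random.randrange(i), i) for i in range(1, n_nodes)]
    return nodes, edges

def assemble(nodes, edges):
    """Phase 2: combine clusters node by node; a real implementation
    scores attachments with a message passing network and keeps only
    chemically valid partial molecules at every step."""
    mol = nodes[0]
    for parent, child in edges:
        mol = mol + "." + nodes[child]       # placeholder attachment
    return mol

nodes, edges = sample_tree()
print(assemble(nodes, edges))
```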
A convolutional neural network that operates directly on graphs is introduced, allowing end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape.
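A sketch of a differentiable, fingerprint-style readout in this spirit: each layer's node features contribute a softmax over fingerprint indices, pooled into a fixed-length vector. All sizes are illustrative assumptions.

```python
# Toy differentiable fingerprint: layer-wise node features are
# pooled into a fixed-length vector via softmax contributions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def neural_fingerprint(A, H, W_layers, W_out, fp_len=16):
    fp = np.zeros(fp_len)
    for W in W_layers:                        # fixed-depth graph walk
        H = np.tanh((A + np.eye(A.shape[0])) @ H @ W)
        fp += softmax(H @ W_out).sum(axis=0)  # pool nodes into the fp
    return fp

n, d = 5, 8
A = (np.random.rand(n, n) > 0.5).astype(float)
H = np.random.randn(n, d)
W_layers = [np.random.randn(d, d) for _ in range(3)]
W_out = np.random.randn(d, 16)
print(neural_fingerprint(A, H, W_layers, W_out).shape)  # (16,)
```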
This work proposes the Molecule Attention Transformer; its key innovation is to augment the attention mechanism in the Transformer using inter-atomic distances and the molecular graph structure, and the learned attention weights are shown to be interpretable from a chemical point of view.
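A hedged sketch of this augmented attention: the softmax attention map is mixed with the adjacency matrix and a distance-derived term. The mixing weights and the distance kernel below are illustrative choices, not the paper's exact parameterization.

```python
# Toy attention mixing standard self-attention with the molecular
# graph (adjacency) and an inter-atomic distance term.
import numpy as np

def mol_attention(Q, K, V, A, D, lam=(0.5, 0.25, 0.25)):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)     # standard softmax map
    dist = np.exp(-D)                            # closer atoms weigh more
    mix = lam[0] * attn + lam[1] * A + lam[2] * dist
    return mix @ V

n, d = 6, 8
Q, K, V = (np.random.randn(n, d) for _ in range(3))
A = (np.random.rand(n, n) > 0.5).astype(float)   # molecular graph
D = np.abs(np.random.randn(n, n))                # inter-atomic distances
out = mol_attention(Q, K, V, A, D)
```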
The DimeNet++ model is proposed, which is 8x faster and 10% more accurate than the original DimeNet on the QM9 benchmark of equilibrium molecules; ensembling and mean-variance estimation for uncertainty quantification are investigated with the goal of accelerating the exploration of the vast space of non-equilibrium structures.
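The ensembling-based uncertainty estimate mentioned here is straightforward to sketch: the spread of an ensemble's predictions serves as the uncertainty signal. The toy linear "models" below are placeholders for trained networks.

```python
# Ensemble uncertainty sketch: per-member predictions give a mean
# estimate and a variance that serves as an uncertainty signal.
import numpy as np

def ensemble_predict(models, x):
    preds = np.stack([m(x) for m in models])     # one row per member
    return preds.mean(axis=0), preds.var(axis=0)

rng = np.random.default_rng(0)
models = [lambda x, w=rng.standard_normal(3): x @ w for _ in range(5)]
x = rng.standard_normal((4, 3))
mean, var = ensemble_predict(models, x)
print(mean.shape, var.shape)                     # (4,) (4,)
```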
This work shows that recurrent neural networks can be trained as generative models for molecular structures, similar to statistical language models in natural language processing, and demonstrates that the properties of the generated molecules correlate very well with those of the molecules used to train the model.
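A minimal sketch of the character-by-character sampling loop such a model uses at generation time; the random stand-in for the trained network's next-character distribution is an assumption, as is the toy character set.

```python
# Toy SMILES sampler: draw characters until an end-of-sequence token.
import numpy as np

CHARS = list("CNOc1()=#") + ["<eos>"]
rng = np.random.default_rng(1)

def next_char_probs(prefix):
    """Stand-in for a trained RNN's next-character distribution."""
    p = rng.random(len(CHARS))
    return p / p.sum()

def sample_smiles(max_len=20):
    out = []
    for _ in range(max_len):
        c = rng.choice(CHARS, p=next_char_probs(out))
        if c == "<eos>":
            break
        out.append(c)
    return "".join(out)

print(sample_smiles())
```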