3260 papers • 126 benchmarks • 313 datasets
A graph convolutional model is introduced that consistently matches or outperforms models using fixed molecular descriptors as well as previous graph neural architectures on both public and proprietary data sets.
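The message-passing idea behind such graph convolutional models can be sketched in a few lines: each atom updates its feature by aggregating over its neighborhood. The toy graph, scalar features, and mean-aggregation update below are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of one graph-convolution (message-passing) step over a
# small molecular graph. Graph, features, and update rule are toy choices.

# Adjacency list for a toy 4-atom chain: atom index -> neighbor indices
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

# One scalar feature per atom (e.g. atomic number; toy values)
features = [6.0, 6.0, 8.0, 1.0]

def gcn_step(adj, feats):
    """Update each atom's feature with the mean of itself and its neighbors."""
    updated = []
    for i, h in enumerate(feats):
        neighborhood = [h] + [feats[j] for j in adj[i]]
        updated.append(sum(neighborhood) / len(neighborhood))
    return updated

new_features = gcn_step(adjacency, features)
print(new_features)  # each atom's feature now mixes in neighbor information
```

Stacking several such steps lets information propagate across the whole molecule, which is what replaces fixed, hand-crafted molecular descriptors.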
GROVER, which stands for Graph Representation frOm self-supervised mEssage passing tRansformer, is a novel framework that can be trained efficiently on large-scale molecular datasets without requiring any supervision, making it immune to the scarcity of labeled molecular data.
This work makes one of the first attempts to systematically evaluate transformers on molecular property prediction tasks via the ChemBERTa model, and suggests that transformers offer a promising avenue of future work for molecular representation learning and property prediction.
Translating between semantically equivalent but syntactically different line notations of molecular structures (such as SMILES and InChI) compresses the meaningful information into a continuous molecular descriptor.
SELFIES (SELF-referencIng Embedded Strings) is a string-based representation of molecules that is 100% robust (every SELFIES string corresponds to a valid molecule) and that allows for explanation and interpretation of the internal workings of generative models.
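The robustness property can be illustrated with a toy decoder that, like SELFIES, carries a derivation state (here, remaining valence) so that any token sequence decodes to something valid. The three-element alphabet and the drop rule below are drastic simplifications of the real grammar; see the `selfies` Python package for the actual encoder/decoder.

```python
# Toy illustration of the SELFIES robustness idea: the decoder tracks the
# free valence of the most recently placed atom, so *any* token sequence
# yields a chemically plausible string. Valences are simplified assumptions.

VALENCE = {"C": 4, "O": 2, "F": 1}

def decode(tokens):
    atoms, remaining = [], 0
    for tok in tokens:
        if not atoms:
            atoms.append(tok)          # first atom uses no bond
            remaining = VALENCE[tok]
        elif remaining > 0:
            atoms.append(tok)          # bond to the previous atom
            remaining = VALENCE[tok] - 1
        # else: no free valence left, so the token is silently dropped --
        # this is what makes every token string decode to something valid.
    return "".join(atoms)

print(decode(["C", "O", "F", "C"]))  # -> "COF": the trailing C is dropped
```

A random-string decoder with this property never produces an invalid molecule, which is exactly what makes the representation convenient inside generative models.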
The Transformer architecture, specifically BERT, is applied to learn flexible and high-quality molecular representations for drug discovery problems; the representations learned by the model MolBERT improve upon the current state of the art on the benchmark datasets.
The proposed framework is end-to-end permutation equivariant with respect to node ordering and achieves competitive results with several generative tasks including general graph generation, molecular generation, unsupervised molecular representation learning to predict molecular properties, link prediction on citation graphs, and graph-based image generation.
GeoSSL, a 3D coordinate denoising pretraining framework, is proposed to model the energy landscape implied by the dynamic nature of 3D molecules, in which the continuous motion of a molecule in 3D Euclidean space forms a smooth potential energy surface.
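The denoising objective behind such coordinate-based pretraining can be sketched as follows: perturb the 3D atom positions with Gaussian noise and score a model on recovering the per-atom displacements. The noise scale, toy coordinates, and function names are assumptions for illustration, not GeoSSL's implementation.

```python
# Sketch of a 3D coordinate-denoising pretraining objective: add Gaussian
# noise to atom positions, then penalize the squared error between a
# model's predicted displacements and the true ones.
import random

def perturb(coords, sigma=0.1, seed=0):
    """Return (noisy coordinates, per-atom noise) for a list of xyz triples."""
    rng = random.Random(seed)
    noise = [[rng.gauss(0.0, sigma) for _ in atom] for atom in coords]
    noisy = [[x + n for x, n in zip(atom, na)]
             for atom, na in zip(coords, noise)]
    return noisy, noise

def denoising_loss(predicted_noise, true_noise):
    """Mean squared error between predicted and true displacements."""
    terms = [(p - t) ** 2
             for pa, ta in zip(predicted_noise, true_noise)
             for p, t in zip(pa, ta)]
    return sum(terms) / len(terms)

coords = [[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [1.7, 0.9, 0.0]]  # toy molecule
noisy, noise = perturb(coords)
print(denoising_loss(noise, noise))  # a perfect predictor drives the loss to 0.0
```

In the actual framework a neural network consumes the noisy conformation and predicts the displacements; minimizing this loss amounts to learning the score of the smooth potential energy surface described above.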
This work proposes an equivariant energy-based model as the pretraining backbone, which enjoys the merit of fulfilling the symmetry of 3D space, and develops a node-level pretraining loss for force prediction in which the Riemann-Gaussian distribution is exploited to ensure that the loss is E(3)-invariant, enabling more robustness.