1. Geometry-enhanced pretraining on interatomic potentials
2. Sample Efficiency Matters: A Benchmark for Practical Molecular Optimization
3. Dual use of artificial-intelligence-powered drug discovery
4. Improving Molecular Contrastive Learning via Faulty Negative Mitigation and Decomposed Fragment Contrast
5. 3D Infomax improves GNNs for Molecular Property Prediction
6. Pre-training Molecular Graph Representation with 3D Geometry
7. Motif-based Graph Self-Supervised Learning for Molecular Property Prediction
8. On the Opportunities and Risks of Foundation Models
9. Chemformer: a pre-trained transformer for computational chemistry
10. Dual-view Molecule Pre-training
11. Geometry-enhanced molecular representation learning for property prediction
12. RoFormer: Enhanced Transformer with Rotary Position Embedding
13. X-MOL: large-scale pre-training for molecular understanding and diverse molecular analysis
14. A merged molecular representation learning for molecular properties prediction with a web-based service
15. ChemBERTa: Large-Scale Self-Supervised Pretraining for Molecular Property Prediction
16. Rethinking Attention with Performers
17. Pushing the boundaries of molecular representation for drug discovery with graph attention mechanism
18. ProtTrans: Towards Cracking the Language of Life’s Code Through Self-Supervised Deep Learning and High Performance Computing
19. Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
20. Rethinking Positional Encoding in Language Pre-training
21. Building powerful and equivariant graph neural networks with structural message-passing
22. BERTology Meets Biology: Interpreting Attention in Protein Language Models
23. Self-Supervised Graph Transformer on Large-Scale Molecular Data
24. Linformer: Self-Attention with Linear Complexity
25. The Message Passing Neural Networks for Chemical Property Prediction on SMILES
26. Directional Message Passing for Molecular Graphs
27. Understand It in 5 Minutes!? Skimming Famous Papers: Jacob Devlin et al.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
28. A Simple Framework for Contrastive Learning of Visual Representations
29. Reformer: The Efficient Transformer
30. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
31. SMILES-BERT: Large Scale Unsupervised Pre-Training for Molecular Property Prediction
32. Self-Attention Based Molecule Representation for Predicting Drug-Target Interaction
33. RoBERTa: A Robustly Optimized BERT Pretraining Approach
34. Molecular Property Prediction: A Multilevel Quantum Interactions Modeling Perspective
35. Utilizing Edge Features in Graph Neural Networks via Variational Information Maximization
36. Self-referencing embedded strings (SELFIES): A 100% robust molecular string representation
37. Strategies for Pre-training Graph Neural Networks
38. Provably Powerful Graph Networks
39. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences
40. Analyzing Learned Molecular Representations for Property Prediction
41. Large Batch Optimization for Deep Learning: Training BERT in 76 minutes
42. LanczosNet: Multi-Scale Deep Graph Convolutional Networks
43. CheMixNet: Mixed DNN Architectures for Predicting Chemical Properties using Multiple Molecular Representations
44. Molecular Transformer: A Model for Uncertainty-Calibrated Chemical Reaction Prediction
45. PubChem 2019 update: improved access to chemical data
46. Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks
47. Representation Learning with Contrastive Predictive Coding
48. N-Gram Graph: Simple Unsupervised Representation for Graphs, with Applications to Molecules
50. Self-Attention with Relative Position Representations
51. DeepDTA: deep drug–target binding affinity prediction
52. SMILES2Vec: An Interpretable General-Purpose Deep Neural Network for Predicting Chemical Properties
53. Graph Attention Networks
54. Convolutional Embedding of Attributed Molecular Graphs for Physical Property Prediction
55. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions
56. Attention is All you Need
57. Inductive Representation Learning on Large Graphs
58. Neural Message Passing for Quantum Chemistry
59. SMILES Enumeration as Data Augmentation for Neural Network Modeling of Molecules
60. Modeling Relational Data with Graph Convolutional Networks
61. MoleculeNet: a benchmark for molecular machine learning
62. Low Data Drug Discovery with One-Shot Learning
63. Quantum-chemical insights from deep tensor neural networks
64. Semi-Supervised Classification with Graph Convolutional Networks
65. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
66. Gated Graph Sequence Neural Networks
67. Convolutional Networks on Graphs for Learning Molecular Fingerprints
68. Fast and accurate modeling of molecular atomization energies with machine learning
69. Extended-Connectivity Fingerprints
70. SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules
71. Open-source cheminformatics
72. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
73. Visualizing Data using t-SNE
74. Daylight Chemical Information Systems, Inc. SMARTS™: a language for describing molecular patterns
75. ZINC - A Free Database of Commercially Available Compounds for Virtual Screening