1. Could graph neural networks learn better molecular representation for drug discovery? A comparison study of descriptor-based and graph-based models
2. Pushing the boundaries of molecular representation for drug discovery with graph attention mechanism
3. Large Associative Memory Problem in Neurobiology and Machine Learning
4. Set Distribution Networks: a Generative Model for Sets of Images
5. Synthesizer: Rethinking Self-Attention for Transformer Models
6. Modern Hopfield Networks and Attention for Immune Repertoire Classification
7. Encoding-based Memory Modules for Recurrent Neural Networks
8. MEMO: A Deep Network for Flexible Combination of Episodic Memories
9. PyTorch: An Imperative Style, High-Performance Deep Learning Library
10. HuggingFace's Transformers: State-of-the-art Natural Language Processing
11. Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving
12. A compact vocabulary of paratope-epitope interactions enables predictability of antibody-antigen binding
13. immuneSIM: tunable multi-feature simulation of B- and T-cell receptor repertoires for immunoinformatics benchmarking
14. Release Strategies and the Social Impacts of Language Models
15. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)
16. Learning to Reason with Third-Order Tensor Products
17. Sharp bounds for the Lambert W function
18. A new mechanical approach to handle generalized Hopfield neural networks
19. Bag encoding strategies in multiple instance learning problems
20. Study and Observation of the Variation of Accuracies of KNN, SVM, LMNN, ENN Algorithms on Eleven Different Datasets from UCI Machine Learning Repository
22. SpiderCNN: Deep Learning on Point Sets with Parameterized Convolutional Filters
23. BRUNO: A Deep Recurrent Model for Exchangeable Data
24. Attention-based Deep Multiple Instance Learning
25. Analysis on Polish Spaces and an Introduction to Optimal Transportation
26. Decoupled Weight Decay Regularization
27. Automatic differentiation in PyTorch
28. Learning to update Auto-associative Memory in Recurrent Neural Networks for Improving Sequence Memorization
29. Attention Is All You Need
30. Self-Normalizing Neural Networks
31. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
32. Neural Message Passing for Quantum Chemistry
33. On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning
34. Immunosequencing identifies signatures of cytomegalovirus exposure history and HLA-mediated effects on the T cell repertoire
36. Frustratingly Short Attention Spans in Neural Language Modeling
37. On a Model of Associative Memory with Huge Storage Capacity
38. On the Maximum Storage Capacity of the Hopfield Model
39. Dense Associative Memory Is Robust to Adversarial Inputs
40. Permutation-equivariant neural networks applied to dynamics prediction
41. Multiple instance learning: A survey of problem characteristics and applications
42. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
43. Using Fast Weights to Attend to the Recent Past
44. Computational Modeling of β-Secretase 1 (BACE-1) Inhibitors Using Ligand Based Approaches
45. Revisiting multiple instance neural networks
46. Pointer Sentinel Mixture Models
47. Semi-Supervised Classification with Graph Convolutional Networks
48. The complexity model of communication with computer images
49. Dense Associative Memory for Pattern Recognition
50. Variations and extension of the convex–concave procedure
51. XGBoost: A Scalable Tree Boosting System
52. Associative Long Short-Term Memory
53. Random Point Sets on the Sphere—Hole Radii, Covering, and Separation
54. The SIDER database of drugs and side effects
55. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books
56. End-To-End Memory Networks
59. Empowering Multiple Instance Histopathology Cancer Diagnosis by Cell Graphs
60. Neural Machine Translation by Jointly Learning to Align and Translate
61. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation
62. Deep learning in neural networks: An overview
63. Dissimilarity-Based Ensembles for Multiple Instance Learning
64. Distributions of angles in random packing on spheres
65. Topological and dynamical complexity of random neural networks
66. A Bayesian Approach to in Silico Blood-Brain Barrier Penetration Modeling
67. Convex Analysis and Monotone Operator Theory in Hilbert Spaces
68. NIST Handbook of Mathematical Functions
70. On the Convergence of the Concave-Convex Procedure
72. MILES: Multiple-Instance Learning via Embedded Instance Selection
73. An isotropic Gaussian mixture can have more modes than components
74. The Concave-Convex Procedure
75. Storage capacity of attractor neural networks with depressing synapses
76. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
77. The Concave-Convex Procedure (CCCP)
78. Solving the Multiple-Instance Problem: A Lazy Learning Approach
79. Convergence Properties of the Softassign Quadratic Assignment Algorithm
80. A Framework for Multiple-Instance Learning
82. On the Storage Capacity of Nonlinear Neural Networks
83. Solving the Multiple Instance Problem with Axis-Parallel Rectangles
84. A Novel Optimizing Network Architecture with Applications
85. Introduction to the Theory of Neural Computation
86. Dynamics of Discrete Time, Continuous State Hopfield Networks
87. Mining association rules between sets of items in large databases
88. On the number of spurious memories in the Hopfield model
89. The capacity of the Hopfield associative memory
90. Saturation Level of the Hopfield Model for Neural Network
91. Neurons with graded response have collective computational properties like those of two-state neurons
92. On the Convergence Properties of the EM Algorithm
93. Neural networks and physical systems with emergent collective computational abilities
94. Analytic theory of the ground state properties of a spin glass. II. XY spin glass
95. Sufficient Conditions for the Convergence of Monotonic Mathematical Programming Algorithms
96. Nonlinear Programming: A Unified Approach
97. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
98. Deep multiple instance learning for digital histopathology
99. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
100. Improved Expressivity Through Dendritic Neural Networks
101. Are Random Forests Truly the Best Classifiers?
102. Do we need hundreds of classifiers to solve real world classification problems?
103. Inequalities on the Lambert W Function and Hyperpower Function
104. Support Vector Machines for Multiple-Instance Learning
105. Information Capacity of the Hopfield Model
106. Reinforcement Learning: An Introduction
107. Untersuchungen zu dynamischen neuronalen Netzen [Investigations on dynamic neural networks]