Tensor Networks
This work demonstrates how algorithms for optimizing tensor networks can be adapted to supervised learning tasks, using matrix product states (tensor trains) to parameterize models for classifying images.
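A minimal sketch of that idea follows, assuming the local pixel feature map Φ(x) = [cos(πx/2), sin(πx/2)] used in that line of work; the core shapes, random initialization, toy data, and the choice to fold the label index into the last core are illustrative simplifications, not the paper's exact setup.

```python
import numpy as np

def feature_map(pixels):
    # Map each pixel value x in [0, 1] to the 2-vector
    # [cos(pi*x/2), sin(pi*x/2)], one local vector per pixel.
    return np.stack([np.cos(np.pi * pixels / 2),
                     np.sin(np.pi * pixels / 2)], axis=-1)

def mps_classify(pixels, cores):
    # cores[i] has shape (bond_left, 2, bond_right); the label index is
    # folded into the final core here for simplicity.
    phi = feature_map(pixels)                  # (n_pixels, 2)
    msg = np.ones(1)                           # left boundary vector
    for i, core in enumerate(cores[:-1]):
        msg = np.einsum('l,lpr,p->r', msg, core, phi[i])
    # Final core: (bond_left, 2, n_classes) -> vector of class scores.
    return np.einsum('l,lpc,p->c', msg, cores[-1], phi[-1])

# Toy usage: a 16-pixel "image", bond dimension 4, 10 classes.
rng = np.random.default_rng(0)
n, chi, n_classes = 16, 4, 10
cores = [rng.normal(size=(1, 2, chi))]
cores += [rng.normal(size=(chi, 2, chi)) for _ in range(n - 2)]
cores += [rng.normal(size=(chi, 2, n_classes))]
scores = mps_classify(rng.uniform(size=n), cores)
print(scores.shape)  # (10,)
```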
This work trains two-dimensional hierarchical tensor networks (TNs) to solve image recognition problems, using a training algorithm derived from the multi-scale entanglement renormalization ansatz, and draws mathematical connections among quantum many-body physics, quantum information theory, and machine learning.
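Schematically, such a hierarchical (tree) network classifies an image by repeatedly coarse-graining local feature vectors through small tensors until one vector remains. The sketch below shows only that contraction pattern, with random tensors and toy dimensions; the function name and all shapes are illustrative, not the paper's trained model or its MERA-derived update rule.

```python
import numpy as np

def ttn_layer(vectors, tensors):
    # One coarse-graining layer of a binary tree tensor network:
    # each rank-3 tensor merges two neighbouring feature vectors.
    out = []
    for k in range(0, len(vectors), 2):
        w = tensors[k // 2]                     # shape (d_out, d_in, d_in)
        out.append(np.einsum('oij,i,j->o', w, vectors[k], vectors[k + 1]))
    return out

# Toy usage: 8 local feature vectors of dimension 2, coarse-grained
# layer by layer until a single output vector remains.
rng = np.random.default_rng(1)
vecs = [rng.normal(size=2) for _ in range(8)]
d = 2
while len(vecs) > 1:
    layer = [rng.normal(size=(d, d, d)) for _ in range(len(vecs) // 2)]
    vecs = ttn_layer(vecs, layer)
print(vecs[0].shape)  # final vector, e.g. class scores
```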
This work describes a tree tensor network (TTN) algorithm for approximating the ground state of either a periodic quantum spin chain or a lattice model on a thin torus, and implements the algorithm using the TensorNetwork library.
This work proposes extensions to the Dynamic Memory Network, specifically within its attention mechanism, and names the resulting neural architecture the Dynamic Memory Tensor Network (DMTN), which yields an over 80% improvement in the number of tasks passed relative to the baseline standard DMN.
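For context, the attention gate that such extensions target scores each input fact against the question and the current episodic memory. Below is a minimal sketch of a DMN-style gate using the reduced interaction features popularized by later DMN variants; the function name, feature set, and scalar sigmoid output are illustrative simplifications of the baseline, not the DMTN's proposed mechanism.

```python
import numpy as np

def dmn_attention_gate(f, q, m, W1, b1, W2, b2):
    # Interaction features between a fact f, the question q, and the
    # current episodic memory m (elementwise products and distances).
    z = np.concatenate([f * q, f * m, np.abs(f - q), np.abs(f - m)])
    # Two-layer scoring network; a sigmoid yields the gate in [0, 1].
    return 1.0 / (1.0 + np.exp(-(W2 @ np.tanh(W1 @ z + b1) + b2)))

# Toy usage with 4-dimensional sentence encodings.
rng = np.random.default_rng(0)
d, h = 4, 8
f, q, m = (rng.normal(size=d) for _ in range(3))
gate = dmn_attention_gate(f, q, m,
                          rng.normal(size=(h, 4 * d)), rng.normal(size=h),
                          rng.normal(size=h), rng.normal())
print(float(gate))
```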
The use of the TensorNetwork API is demonstrated with applications in both physics and machine learning, with details appearing in companion papers.
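As a concrete illustration, the library's basic workflow wraps tensors in nodes, connects shared indices as edges, and contracts them. The minimal example below (random matrices, a single contraction realizing a matrix product) follows the pattern from the library's documentation rather than either companion paper.

```python
import numpy as np
import tensornetwork as tn

# Two rank-2 tensors (matrices) as nodes of a network.
a = tn.Node(np.random.normal(size=(2, 3)))
b = tn.Node(np.random.normal(size=(3, 4)))

# Connect the shared index; '^' creates an edge between the two axes.
edge = a[1] ^ b[0]

# Contracting the edge performs the matrix product.
result = tn.contract(edge)
print(result.tensor.shape)  # (2, 4)
```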
This work uses Logic Tensor Networks (LTNs), a novel Statistical Relational Learning framework that exploits both similarities with other seen relationships and background knowledge, expressed as logical constraints between subjects, relations, and objects, to perform zero-shot learning.
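The core mechanism can be sketched in a few lines: relations are grounded as differentiable functions returning truth degrees in [0, 1], and logical constraints become fuzzy-logic expressions whose satisfaction can be maximized during training, so unseen relations are still constrained by axioms. Everything below (the linear predicate parameterization, the Reichenbach implication, the example axiom) is an illustrative toy, not the framework's actual API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predicate(theta, subj, obj):
    # A relation grounded as a differentiable function that returns a
    # truth degree in [0, 1] for an embedded (subject, object) pair.
    return sigmoid(np.concatenate([subj, obj]) @ theta)

def implies(a, b):
    # Fuzzy implication (Reichenbach): 1 - a + a*b, values in [0, 1].
    return 1.0 - a + a * b

# Background-knowledge axiom, e.g. "ride(x, y) -> on(x, y)": its truth
# degree can be added to the training objective even for relations
# that have no labelled examples.
rng = np.random.default_rng(2)
theta_ride, theta_on = rng.normal(size=8), rng.normal(size=8)
person, horse = rng.normal(size=4), rng.normal(size=4)
satisfaction = implies(predicate(theta_ride, person, horse),
                       predicate(theta_on, person, horse))
print(float(satisfaction))
```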
This work focuses on the class of tensor networks, which have been a workhorse for physicists analysing quantum many-body systems over the last two decades, and proposes adaptations for 2D images using classical image-domain concepts such as the local orderlessness of images.
This work proposes Faster-LTN, an object detector composed of a convolutional backbone and an LTN, which is the first attempt to combine the two frameworks in an end-to-end training setting.
RWFNs are used to perform Visual Relationship Detection, a more challenging Semantic Image Interpretation (SII) task, and are shown to outperform LTNs on predicate detection while using far fewer adaptable parameters (a 1:56 ratio).