3260 papers • 126 benchmarks • 313 datasets
Understanding the meaning of text by composing the meanings of the individual words in the text (Source: https://arxiv.org/pdf/1405.7908.pdf)
(Image credit: Papersgraph)
These leaderboards are used to track progress in Semantic Composition
No benchmarks available.
Use these libraries to find Semantic Composition models and implementations
No subtasks available.
The Lifted Matrix-Space model is introduced, which uses a global transformation to map vector word embeddings to matrices that can be composed via an operation based on matrix-matrix multiplication; it is found to consistently outperform TreeLSTM, the previous best-known composition function for tree-structured models.
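A minimal sketch of the composition step this summary describes: a single learned transformation "lifts" each word vector to a matrix, and constituents combine by matrix multiplication. All names, shapes, and the tanh nonlinearity below are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM, MAT_DIM = 50, 10  # word-vector size, lifted-matrix side length (assumed)

# One global transformation shared by all words: it lifts a
# d-dimensional embedding to a (MAT_DIM x MAT_DIM) matrix.
W_lift = rng.normal(scale=0.1, size=(MAT_DIM * MAT_DIM, EMB_DIM))

def lift(word_vec):
    """Map a word embedding to its matrix representation."""
    return (W_lift @ word_vec).reshape(MAT_DIM, MAT_DIM)

def compose(left_mat, right_mat):
    """Compose two constituents via matrix-matrix multiplication
    (the nonlinearity is an assumption of this sketch)."""
    return np.tanh(left_mat @ right_mat)

# Compose a two-word phrase bottom-up, e.g. ("very", "good").
very, good = rng.normal(size=EMB_DIM), rng.normal(size=EMB_DIM)
phrase_mat = compose(lift(very), lift(good))
print(phrase_mat.shape)  # (10, 10) -- matrix representation of the phrase
```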
This paper proposes a formal and general way to quantify the importance of each word and phrase, together with the Sampling and Contextual Decomposition (SCD) and Sampling and Occlusion (SOC) algorithms, which outperform prior hierarchical explanation algorithms.
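The occlusion half of SOC is straightforward to illustrate: a span's importance is how much the model's output drops when the span is masked. The sketch below omits SOC's sampling step (the real algorithm also samples replacement contexts around the span from a language model and averages), and the toy lexicon model is a stand-in for an actual classifier.

```python
# Toy sentiment "model": counts positive/negative lexicon hits.
POS, NEG = {"good", "great"}, {"bad", "awful"}

def toy_model(tokens):
    return sum(t in POS for t in tokens) - sum(t in NEG for t in tokens)

def occlusion_importance(model, tokens, start, end, mask="[MASK]"):
    """Importance of tokens[start:end] = drop in the model's score when
    the span is replaced by mask tokens (the occlusion step only; SOC's
    context sampling is omitted in this sketch)."""
    masked = tokens[:start] + [mask] * (end - start) + tokens[end:]
    return model(tokens) - model(masked)

sent = "not a good movie".split()
print(occlusion_importance(toy_model, sent, 2, 3))  # importance of "good" -> 1
```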
SentiBERT outperforms baseline approaches in capturing negation and the contrastive relation and in modeling compositional sentiment semantics, and it can be transferred to other sentiment analysis tasks as well as related tasks, such as emotion classification.
Semantically Proportional Mixing (SnapMix) is proposed, which exploits class activation maps (CAM) to lessen the label noise in augmenting fine-grained data and consistently outperforms existing mixing-based approaches across different datasets and network depths.
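A sketch of the label-assignment idea, assuming the CAMs and cut boxes are already given: unlike CutMix, whose label weights follow pixel area, SnapMix weights each label by the normalized CAM mass of the region removed from one image or pasted in from the other. Function and variable names here are hypothetical.

```python
import numpy as np

def snapmix_labels(cam_a, cam_b, box_a, box_b):
    """Toy version of SnapMix label weights. A patch cut from image B at
    box_b is pasted into image A at box_a; each label weight is the share
    of that image's normalized CAM mass involved, not the pixel area.
    cam_*: 2-D class activation maps; box_*: (y0, y1, x0, x1)."""
    cam_a = cam_a / cam_a.sum()
    cam_b = cam_b / cam_b.sum()
    y0, y1, x0, x1 = box_a
    rho_a = 1.0 - cam_a[y0:y1, x0:x1].sum()  # semantic share kept from image A
    y0, y1, x0, x1 = box_b
    rho_b = cam_b[y0:y1, x0:x1].sum()        # semantic share pasted from image B
    return rho_a, rho_b

cam_a, cam_b = np.random.rand(7, 7), np.random.rand(7, 7)
wa, wb = snapmix_labels(cam_a, cam_b, (2, 5, 2, 5), (1, 4, 1, 4))
# mixed target = wa * onehot(label_a) + wb * onehot(label_b)
# note: wa + wb need not sum to 1, which is the point of semantic weighting
```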
A dual-space model is introduced that matches the performance of the best previous models on relations and on compositions, while being able to model relations, compositions, and other aspects of semantics in one model.
This model treats relation paths as translations between entities for representation learning and addresses two key challenges: (1) since not all relation paths are reliable, it designs a path-constraint resource allocation algorithm to measure the reliability of relation paths; and (2) it represents relation paths via semantic composition of relation embeddings.
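The composition in (2) can be shown with the additive variant: a path's embedding is the sum of its relation embeddings, scored with a TransE-style translation. This is a minimal sketch; the embeddings below are random stand-ins rather than trained values, and the paper also considers multiplicative and RNN composition.

```python
import numpy as np

dim = 8
rng = np.random.default_rng(1)
h, t = rng.normal(size=dim), rng.normal(size=dim)  # head/tail entity embeddings
r_born_in, r_capital_of = rng.normal(size=dim), rng.normal(size=dim)

path = r_born_in + r_capital_of        # additive semantic composition of the path
energy = np.linalg.norm(h + path - t)  # TransE-style translation score
print(energy)  # lower energy = the path better explains the pair (h, t)
```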
The results show that a system based on a reasonably sized semantic lexicon and a manageable number of non-first-order axioms enables efficient logical inferences, including those involving generalized quantifiers and intensional operators, and outperforms the state-of-the-art first-order inference system.
This paper shows that distributional inference improves sparse word representations on several word similarity benchmarks and demonstrates that the model is competitive with the state of the art for adjective-noun, noun-noun, and verb-object compositions while remaining fully interpretable.
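As a toy illustration of why such compositions stay interpretable: with sparse count vectors, every dimension is a nameable context feature, so even a simple pointwise addition of an adjective and a noun yields a phrase vector whose largest entries can be read off directly. The paper's actual composition operations are more refined than this sketch.

```python
from collections import Counter

# Sparse, interpretable vectors: keys are context features, values are counts.
adj  = Counter({"fast": 3, "red": 1, "shiny": 2})
noun = Counter({"engine": 4, "fast": 1, "wheel": 2})

phrase = adj + noun  # adjective-noun composition by pointwise addition
print(phrase.most_common(3))  # top features of the composed phrase, by name
```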
Experimental results show that the proposed method significantly outperforms prior state-of-the-art approaches across multiple evaluation metrics.