3260 papers • 126 benchmarks • 313 datasets
Subjectivity analysis is a task related to sentiment analysis, with the goal of labeling an opinion as either subjective or objective.
(Image credit: Papersgraph)
These leaderboards are used to track progress in Subjectivity Analysis.
Use these libraries to find Subjectivity Analysis models and implementations.
It is found that transfer learning using sentence embeddings tends to outperform word-level transfer, achieving surprisingly good performance with minimal amounts of supervised training data for a transfer task.
EDA consists of four simple but powerful operations: synonym replacement, random insertion, random swap, and random deletion; it is shown to improve performance for both convolutional and recurrent neural networks.
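The four EDA operations can be sketched in a few lines. This is a minimal illustration, not the reference implementation: the `SYNONYMS` table here is a toy stand-in (the EDA paper draws synonyms from WordNet), and function names and parameters are chosen for clarity.

```python
import random

# Toy synonym table; the EDA paper uses WordNet synonyms instead.
SYNONYMS = {"good": ["great", "fine"], "movie": ["film"], "bad": ["poor", "awful"]}

def synonym_replacement(words, n=1):
    # Replace up to n words that have known synonyms.
    out = words[:]
    candidates = [i for i, w in enumerate(out) if w in SYNONYMS]
    for i in random.sample(candidates, min(n, len(candidates))):
        out[i] = random.choice(SYNONYMS[out[i]])
    return out

def random_insertion(words, n=1):
    # Insert a synonym of a random word at a random position, n times.
    out = words[:]
    for _ in range(n):
        w = random.choice(out)
        out.insert(random.randrange(len(out) + 1),
                   random.choice(SYNONYMS.get(w, [w])))
    return out

def random_swap(words, n=1):
    # Swap two randomly chosen positions, n times.
    out = words[:]
    for _ in range(n):
        i, j = random.randrange(len(out)), random.randrange(len(out))
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(words, p=0.1):
    # Drop each word independently with probability p,
    # keeping at least one word.
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]
```

Each operation returns a perturbed copy of the token list, so an augmented training set is built simply by applying them to each sentence a few times.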
This paper introduces to the scientific community the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos, called the Multimodal Opinion-level Sentiment Intensity (MOSI) dataset, which is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-millisecond annotated audio features.
This paper demonstrates a counter-intuitive postprocessing technique -- eliminate the common mean vector and a few top dominating directions from the word vectors -- that renders off-the-shelf representations even stronger.
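The described postprocessing can be sketched with NumPy: subtract the mean embedding, then project out the top few principal directions. The function name `postprocess` and the choice of `d=2` removed directions are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def postprocess(vectors, d=2):
    """Remove the common mean vector and the top-d dominating
    directions (principal components) from word embeddings.
    `vectors` has shape (vocab_size, dim)."""
    mu = vectors.mean(axis=0)
    centered = vectors - mu
    # Rows of u are the top-d principal directions of the
    # centered embedding matrix.
    u = np.linalg.svd(centered, full_matrices=False)[2][:d]
    # Subtract each vector's projection onto those directions.
    return centered - centered @ u.T @ u
```

After this step, the embeddings have zero mean and no component along the removed dominating directions, which is the effect the paper exploits.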
This work proposes three strategies to stabilize the dynamic routing process to alleviate the disturbance of some noise capsules which may contain “background” information or have not been successfully trained.
A computer-assisted literature review traces the roots of sentiment analysis to studies on public opinion analysis at the beginning of the 20th century and to the text subjectivity analysis performed by the computational linguistics community in the 1990s, and presents the top-20 cited papers from Google Scholar and Scopus.
The properties of byte-level recurrent language models are explored and a single unit which performs sentiment analysis is found which achieves state of the art on the binary subset of the Stanford Sentiment Treebank.
The Gated Multimodal Embedding LSTM with Temporal Attention model is proposed; composed of two modules, it performs modality fusion at the word level, better models the multimodal structure of speech through time, and achieves better sentiment comprehension.
This paper conducts a point-by-point comparative study between Simple Word-Embedding-based Models (SWEMs), consisting of parameter-free pooling operations, and word-embedding-based RNN/CNN models, and proposes two additional pooling strategies over learned word embeddings: a max-pooling operation for improved interpretability and a hierarchical pooling operation, which preserves spatial information within text sequences.
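The two proposed pooling strategies are parameter-free and easy to sketch over a matrix of word embeddings (one row per word). The function names and the `window` parameter below are illustrative; the sketch assumes precomputed embeddings.

```python
import numpy as np

def swem_max(embeddings):
    # Max-pooling over the sequence dimension: each output
    # feature is the maximum of that feature across all words.
    return embeddings.max(axis=0)

def swem_hier(embeddings, window=3):
    # Hierarchical pooling: average within local sliding windows
    # first (preserving local word order), then max-pool over
    # the window representations.
    n = len(embeddings)
    windows = [embeddings[i:i + window].mean(axis=0)
               for i in range(max(1, n - window + 1))]
    return np.max(windows, axis=0)
```

Because neither operation has trainable parameters, the model's capacity lives entirely in the word embeddings, which is what makes the comparison against RNN/CNN encoders interesting.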
A new approach reformulates a potential NLP task as an entailment task and then fine-tunes the model with as little as 8 examples; it improves various existing SOTA few-shot learning methods by 12% and yields competitive few-shot performance against 500-times-larger models such as GPT-3.
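The core reformulation idea can be illustrated with a small helper: each class label is turned into a natural-language hypothesis, and classification reduces to asking which hypothesis the input text entails. The function name and the hypothesis template below are hypothetical, chosen only to illustrate the pattern.

```python
def to_entailment(text, labels):
    # Each candidate label becomes a hypothesis; an entailment
    # model then scores (premise, hypothesis) pairs, and the
    # label with the most-entailed hypothesis is predicted.
    return [(text, f"This text expresses a {label} opinion.")
            for label in labels]
```

For example, a subjectivity classifier would score the pairs produced for labels like `"subjective"` and `"objective"` with a pretrained entailment model instead of training a task-specific classification head.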