3260 papers • 126 benchmarks • 313 datasets
Given the transcript of a conversation along with speaker information for each constituent utterance, the ERC task aims to identify the emotion of each utterance from a set of pre-defined emotions. Formally, given an input sequence of N utterances [(u_1, p_1), (u_2, p_2), ..., (u_N, p_N)], where each utterance u_i = [u_{i,1}, u_{i,2}, ..., u_{i,T}] consists of T words u_{i,j} and is spoken by party p_i, the task is to predict the emotion label e_i of each utterance u_i.
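The sketch below illustrates the input/output structure of the formulation above as a minimal Python example. The emotion label set, the `Utterance` container, and the placeholder `predict_emotions` function are illustrative assumptions, not part of any specific benchmark or model.

```python
# Minimal sketch of the ERC input/output structure described above.
# Label set and dialogue content are illustrative assumptions.
from dataclasses import dataclass
from typing import List

EMOTIONS = ["neutral", "joy", "sadness", "anger", "surprise", "fear", "disgust"]  # example label set

@dataclass
class Utterance:
    words: List[str]   # u_i = [u_{i,1}, ..., u_{i,T}]
    speaker: str       # party p_i

def predict_emotions(conversation: List[Utterance]) -> List[str]:
    """Assign one emotion label e_i to each utterance u_i.

    Placeholder: tags every utterance as 'neutral'. A real ERC model would
    condition on the full conversational context and speaker identities.
    """
    return ["neutral" for _ in conversation]

if __name__ == "__main__":
    dialogue = [
        Utterance("I got the job !".split(), speaker="A"),
        Utterance("That is wonderful news !".split(), speaker="B"),
    ]
    print(predict_emotions(dialogue))  # ['neutral', 'neutral']
```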
These leaderboards are used to track progress in Emotion Recognition in Conversation.
Use these libraries to find Emotion Recognition in Conversation models and implementations.