Music emotion recognition (MER) aims to predict the emotion that a piece of music expresses or evokes, typically as categorical mood labels or as continuous valence and arousal values, from audio, lyrics, or both.
No benchmark leaderboards are currently available to track progress in music emotion recognition.
No libraries or subtasks are currently listed for this task.
This work merges audioLIME -- a source-separation-based explainer -- with mid-level perceptual features, forming an intuitive chain of connections between the input audio and the output emotion predictions, and demonstrates the method's usefulness by applying it to debug a biased emotion prediction model.
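Conceptually, the explanation works like LIME: perturb the input by switching source-separated components on and off, query the model, and fit a weighted linear surrogate whose coefficients indicate each component's contribution. Below is a minimal sketch of that loop; `predict_emotion`, the four placeholder sources, and their names are illustrative stand-ins, not the audioLIME API.

```python
# LIME-style sketch of the audioLIME idea: toggle source-separated
# components on/off and fit a weighted linear surrogate to the model's
# outputs. All sources and the predictor below are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
sources = [rng.standard_normal(22050) for _ in range(4)]  # stand-ins for vocals, drums, bass, other

def predict_emotion(audio: np.ndarray) -> float:
    """Placeholder for a trained arousal/valence regressor."""
    return float(np.tanh(audio.mean() * 100))

n_samples = 200
masks = rng.integers(0, 2, size=(n_samples, len(sources)))  # which sources are kept
preds = np.array([predict_emotion(sum(m * s for m, s in zip(mask, sources)))
                  for mask in masks])
weights = np.exp(-np.sum(1 - masks, axis=1))  # perturbations closer to the original count more

surrogate = Ridge(alpha=1.0).fit(masks, preds, sample_weight=weights)
for name, coef in zip(["vocals", "drums", "bass", "other"], surrogate.coef_):
    print(f"{name:>6}: {coef:+.4f}")  # per-source contribution to the prediction
```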
This work reproduces traditional feature-engineering approaches, proposes a new deep-learning-based model that outperforms the classical models on the arousal detection task, and shows that both approaches perform equally well on the valence prediction task.
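A head-to-head of this kind can be set up with off-the-shelf scikit-learn models. The sketch below compares a classical regressor (SVR) against a small neural network on an arousal-style target; the features and labels are synthetic stand-ins, not the paper's dataset.

```python
# Hypothetical comparison of a feature-engineering baseline (SVR) vs a
# small neural network on a synthetic arousal regression task.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 40))  # stand-in for hand-crafted audio descriptors
y = np.tanh(X[:, :5].sum(axis=1)) + 0.1 * rng.standard_normal(500)  # synthetic "arousal"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("SVR (classical)", SVR(C=1.0)),
                    ("MLP (neural)", MLPRegressor(hidden_layer_sizes=(64, 32),
                                                  max_iter=2000, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```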
It is found that ensembling representations trained with different training lengths can significantly improve tagging results, which suggests incorporating multiple temporal resolutions into the network architecture as a possible direction for future work.
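The ensembling step itself is simple: run each model (trained with a different input length) on the same audio and average the per-tag probabilities. The sketch below shows only the mechanics, with placeholder models that return random scores instead of real predictions.

```python
# Sketch of ensembling tag probabilities from models trained on
# different input lengths. The three "models" are placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_TAGS = 50

def model_short(audio):  return rng.random(N_TAGS)   # e.g. trained on short excerpts (hypothetical)
def model_medium(audio): return rng.random(N_TAGS)   # e.g. trained on medium excerpts
def model_long(audio):   return rng.random(N_TAGS)   # e.g. trained on long excerpts

audio = rng.standard_normal(22050 * 30)  # 30 s of audio at 22.05 kHz
ensemble = np.mean([m(audio) for m in (model_short, model_medium, model_long)], axis=0)
top5 = np.argsort(ensemble)[-5:][::-1]
print("top-5 tag indices:", top5)
```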
This study uses a transformer-based model with XLNet as the base architecture, which to date has not been used to identify the emotional connotations of music from lyrics, and enhances web crawlers' accuracy in extracting lyrics.
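With the Hugging Face Transformers library, an XLNet sequence classifier over lyrics can be assembled in a few lines. The sketch below is not the paper's exact configuration: the four emotion labels are illustrative, and the classification head is randomly initialized until fine-tuned.

```python
# Minimal XLNet-based lyric emotion classifier (Hugging Face
# Transformers). Labels are hypothetical; the head needs fine-tuning
# before its outputs mean anything.
import torch
from transformers import AutoTokenizer, XLNetForSequenceClassification

labels = ["happy", "sad", "angry", "relaxed"]  # illustrative label set
tok = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained(
    "xlnet-base-cased", num_labels=len(labels))  # classification head is randomly initialized

lyrics = "I walk these empty streets alone tonight"
inputs = tok(lyrics, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax())])  # arbitrary until the model is fine-tuned
```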
It is found that, of the 11 high-level song features, mainly 5 contribute to performance, and that multi-modal features do better than audio alone when predicting valence.
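One simple way to probe which of the 11 features matter is to train a tree ensemble and inspect its feature importances. The sketch below does this on synthetic data in which only 5 features actually drive the valence target; the feature names and data are placeholders, not the paper's.

```python
# Hypothetical feature-importance check: which of 11 "high-level"
# features drive a synthetic valence target?
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
names = [f"feat_{i}" for i in range(11)]  # placeholders for high-level song features
X = rng.standard_normal((400, 11))
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(400)  # only 5 features matter

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```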