Predicting prosodic prominence from text. This is a 2-way classification task, assigning each word in a sentence a label of 1 (prominent) or 0 (non-prominent). (Image credit: Helsinki Prosody Corpus)
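The word-level labeling scheme above can be sketched in plain Python. This is a hypothetical illustration of the data format and a simple word-accuracy metric, not code from the Helsinki Prosody Corpus; the sentence, labels, and function name are invented for the example.

```python
def word_accuracy(gold, pred):
    """Fraction of words whose predicted prominence label (0 or 1)
    matches the gold label."""
    assert len(gold) == len(pred), "label sequences must align word-for-word"
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

# Each word gets a binary prominence label: 1 = prominent, 0 = non-prominent.
sentence = ["and", "it's", "a", "great", "day"]
gold = [0, 0, 0, 1, 0]  # hypothetical gold annotation
pred = [0, 1, 0, 1, 0]  # hypothetical model output

print(word_accuracy(gold, pred))  # 0.8
```

In practice a sequence model (e.g. a BERT-style token classifier, as in the paper summarized below) would produce `pred` from the raw text.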
These leaderboards are used to track progress in Prosody Prediction.
Use these libraries to find Prosody Prediction models and implementations.
A new natural language processing dataset and benchmark for predicting prosodic prominence from written text is introduced, showing that pre-trained contextualized word representations from BERT outperform the other models even with less than 10% of the training data.
A new evaluation framework, "SUPERB-prosody," is presented, consisting of three prosody-related downstream tasks and two pseudo tasks; it concludes that SSL speech models are highly effective for prosody-related tasks.