These leaderboards are used to track progress in drum transcription; no benchmarks are currently available.
This work optimizes classifiers for downstream generation by predicting expressive dynamics (velocity) and shows through listening tests that they produce outputs with improved perceptual quality, despite achieving similar results on classification metrics.
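To make the velocity-prediction idea concrete, here is a minimal sketch (not the paper's actual architecture: `predict_velocities`, the feature dimension, and the untrained weights are all hypothetical) of a small regression head that maps each detected onset's feature vector to an expressive MIDI velocity alongside the usual onset classification:

```python
import numpy as np

def predict_velocities(onset_features, w, b):
    """Hypothetical velocity head: map each detected onset's feature
    vector to a dynamics value in [0, 1] via a sigmoid, then scale to
    the MIDI velocity range 1..127."""
    logits = onset_features @ w + b
    vel01 = 1.0 / (1.0 + np.exp(-logits))          # squash to [0, 1]
    return np.clip(np.round(vel01 * 127), 1, 127).astype(int)

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 8))     # 3 detected onsets, 8-dim features (toy)
w, b = rng.normal(size=8), 0.0      # toy, untrained head weights
print(predict_velocities(feats, w, b))   # three MIDI velocities in 1..127
```

The point of the summary is that adding such a regression target need not change classification metrics much, yet listeners prefer renderings that use the predicted dynamics.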
YourMT3+, a suite of models for enhanced multi-instrument music transcription based on the recent language-token decoding approach of MT3, is introduced; it enhances the MT3 encoder by adopting a hierarchical attention transformer in the time-frequency domain and integrating a mixture of experts.
This paper proposes a model pruning method based on the lottery ticket hypothesis, modifying the original approach to allow parameters to be removed explicitly through structured trimming of entire units, which yields models that are effectively lighter in size, memory footprint, and number of operations.
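A minimal sketch of the structured-trimming step described above (the full method iterates train/prune/rewind cycles per the lottery ticket hypothesis; `prune_units` and the magnitude criterion here are illustrative assumptions): dropping entire units shrinks the actual weight tensor, unlike mask-based unstructured pruning, which is why size, memory, and operation counts all fall.

```python
import numpy as np

def prune_units(weight, keep_ratio):
    """Structured pruning sketch: remove entire units (rows) of a
    layer's weight matrix, keeping the rows with the largest L2 norm.
    The surviving tensor is genuinely smaller, so parameters, memory,
    and per-forward operations all shrink."""
    norms = np.linalg.norm(weight, axis=1)           # one score per unit
    n_keep = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.sort(np.argsort(norms)[-n_keep:])      # surviving unit indices
    return weight[keep], keep

# toy layer with 4 units of 3 weights each
w = np.array([[0.1, 0.0, 0.1],
              [1.0, 2.0, 1.0],
              [0.0, 0.1, 0.0],
              [3.0, 1.0, 2.0]])
pruned, kept = prune_units(w, keep_ratio=0.5)
print(pruned.shape)   # (2, 3): half the units removed outright
print(kept)           # [1 3]: the two highest-norm units survive
```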