Music performance rendering is the task of generating human-like performances for written musical scores.
This model combines recurrent neural networks with hierarchical attention and a conditional variational autoencoder; it takes a sequence of note-level score features extracted from MusicXML as input and predicts the piano performance features of the corresponding notes.
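The summary above describes a model that maps note-level score features to note-level performance features. A minimal sketch of that interface is shown below; the field names and the trivial rule-based stand-in for the learned mapping are illustrative assumptions, not the paper's actual design.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical note-level records; fields are illustrative, not the paper's.
@dataclass
class ScoreNote:
    pitch: int              # MIDI pitch taken from the MusicXML score
    onset_beats: float      # notated onset position in beats
    duration_beats: float   # notated duration in beats
    dynamic_marking: str    # e.g. "p", "mf", "f"

@dataclass
class PerformanceNote:
    velocity: int           # MIDI velocity (loudness)
    onset_deviation: float  # seconds early/late vs. a strict tempo
    duration_ratio: float   # performed / notated duration (articulation)

def render_baseline(score: List[ScoreNote]) -> List[PerformanceNote]:
    """Rule-based stand-in for the learned RNN/CVAE mapping: translate
    dynamic markings to fixed velocities and play perfectly in time."""
    velocity_table = {"p": 50, "mf": 70, "f": 90}
    return [
        PerformanceNote(
            velocity=velocity_table.get(n.dynamic_marking, 64),
            onset_deviation=0.0,
            duration_ratio=1.0,
        )
        for n in score
    ]

score = [ScoreNote(60, 0.0, 1.0, "mf"), ScoreNote(64, 1.0, 1.0, "f")]
performance = render_baseline(score)
```

The learned model replaces `render_baseline` with a network whose stochastic latent (the CVAE) lets the same score yield different plausible performances.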
This paper designs a model using a note-level gated graph neural network and a measure-level hierarchical attention network with bidirectional long short-term memory, trained with an iterative feedback method, and applies it to rendering expressive piano performance from the music score.
A novel approach for reconstructing human expressiveness in piano performance: a multi-layer bidirectional Transformer encoder integrates pianist identities to control the sampling process, and the system's ability to model variations in expressiveness across different pianists is explored.
A tokenized representation of symbolic score and performance music, the Score Performance Music tuple (SPMuple), is designed, and a novel way of encoding the local performance tempo over a local note time window is validated.
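Local performance tempo can be derived by comparing notated inter-onset distances with performed ones inside a note window. The windowed-average formula below is an assumption for illustration, not the paper's exact SPMuple encoding.

```python
# Toy illustration of encoding local performance tempo over a note window.
def local_tempo(score_onsets_beats, perf_onsets_sec, index, window=2):
    """Average beats-per-second around note `index`, computed from the
    span between the first and last neighbouring onsets in the window."""
    lo = max(0, index - window)
    hi = min(len(score_onsets_beats) - 1, index + window)
    beats = score_onsets_beats[hi] - score_onsets_beats[lo]
    seconds = perf_onsets_sec[hi] - perf_onsets_sec[lo]
    return beats / seconds  # beats per second in the local window

# Score at quarter-note onsets 0..4 beats; performed steadily at 120 BPM
# (2 beats per second, so performed onsets are 0.5 s apart).
score_onsets = [0.0, 1.0, 2.0, 3.0, 4.0]
perf_onsets = [0.0, 0.5, 1.0, 1.5, 2.0]
tempo = local_tempo(score_onsets, perf_onsets, index=2)
```

A rubato performance would yield different `local_tempo` values at different indices, which is exactly the note-level information a tokenized tempo encoding has to carry.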
DExter is a new approach that leverages diffusion probabilistic models to render Western classical piano performances; by jointly conditioning on score and perceptual-feature representations, it enables the generation of interpretations guided by perceptually meaningful features.
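Diffusion probabilistic models of this kind train by noising clean performance parameters and learning to reverse the process. The sketch below shows only the standard closed-form forward (noising) step; the variable names and schedule values are illustrative, not DExter's.

```python
import math
import random

def noisy_sample(x0, alpha_bar, rng):
    """Closed-form forward step q(x_t | x_0) of a diffusion model:
    scale the clean value and add Gaussian noise."""
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps

rng = random.Random(0)
x0 = 0.8  # e.g. a normalised expressive parameter for one note
nearly_clean = noisy_sample(x0, alpha_bar=0.9999, rng=rng)  # early timestep
nearly_noise = noisy_sample(x0, alpha_bar=0.0001, rng=rng)  # late timestep
# At generation time, the learned reverse process denoises from pure noise,
# conditioned on score and perceptual-feature representations.
```

Conditioning enters through the denoising network, which receives the score and perceptual features alongside the noisy sample at every reverse step.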