Audio inpainting is the task of filling in missing or corrupted segments (gaps) in audio data.
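To make the task concrete, here is a minimal sketch of the problem setup: a waveform with a missing segment and a binary mask marking which samples are observed. The signal, sample rate, and gap length below are illustrative choices, not taken from any particular paper.

```python
# Minimal sketch of the audio inpainting setup: a waveform with a missing
# segment (gap) and a binary mask marking which samples are observed.
# All values here are illustrative.
import numpy as np

sr = 16000                                   # sample rate in Hz
t = np.arange(sr) / sr                       # one second of audio
clean = 0.5 * np.sin(2 * np.pi * 440 * t)    # toy signal (440 Hz tone)

gap_start, gap_len = 8000, int(0.375 * sr)   # e.g. a 375 ms gap
mask = np.ones_like(clean, dtype=bool)
mask[gap_start:gap_start + gap_len] = False  # False = missing samples

degraded = clean.copy()
degraded[~mask] = 0.0                        # the "hole" to be filled

# An inpainting method receives (degraded, mask) and must reconstruct the
# samples where mask is False so that the result sounds plausible.
```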
These leaderboards are used to track progress in Audio Inpainting.
No benchmarks available.
Use these libraries to find Audio Inpainting models and implementations.
No datasets available.
No subtasks available.
GACELA represents a framework capable of integrating future improvements such as processing of more auditory-related features or explicit musical features, and was evaluated in listening tests on music signals of varying complexity and varying gap durations from 375 to 1500 ms.
The paper presents a unified, flexible framework for the tasks of audio inpainting, declipping, and dequantization that is extended to cover analogous degradation models in a transformed domain, e.g. quantization of the signal's time-frequency coefficients.
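As a rough illustration of the three degradation models such a unified framework covers, the sketch below implements masking (inpainting), hard clipping (declipping), and uniform quantization (dequantization) in the time domain. The thresholds and step sizes are arbitrary example values, and the paper's actual formulation, including the transformed-domain variants, may differ.

```python
# Hedged sketch of the three time-domain degradation models: masking
# (inpainting), hard clipping (declipping), and uniform quantization
# (dequantization). Parameter values are illustrative.
import numpy as np

def degrade_inpainting(x: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out the samples marked as missing (mask == False)."""
    y = x.copy()
    y[~mask] = 0.0
    return y

def degrade_clipping(x: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Hard-clip the waveform to [-threshold, threshold]."""
    return np.clip(x, -threshold, threshold)

def degrade_quantization(x: np.ndarray, step: float = 0.1) -> np.ndarray:
    """Uniform quantization with the given step size."""
    return step * np.round(x / step)

# The same degradations can also be posed in a transformed domain, e.g. by
# quantizing a signal's time-frequency coefficients instead of its samples.
```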
This paper proposes to structure the spectrogram with nonnegative matrix factorization (NMF) in a probabilistic framework, and derives two expectation-maximization algorithms for estimating the parameters of the model, depending on whether the problem is posed in the time domain or in the time-frequency domain.
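The sketch below illustrates the general idea of NMF-based spectrogram inpainting: factorize the magnitude spectrogram from its observed time-frequency bins and read the low-rank reconstruction at the missing bins. The masked multiplicative updates shown here are a stand-in for the paper's expectation-maximization algorithms, and the rank, iteration count, and epsilon are arbitrary.

```python
# Sketch of masked NMF for spectrogram inpainting: fit V ~ W @ H using only
# the observed bins, then use W @ H to fill in the missing bins. Masked
# Euclidean multiplicative updates stand in for the paper's EM algorithms.
import numpy as np

def masked_nmf_inpaint(V, M, rank=8, n_iter=200, eps=1e-9):
    """V: nonnegative spectrogram (freq x time); M: 1 where observed, 0 where missing."""
    F, T = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        WH = W @ H
        W *= ((M * V) @ H.T) / ((M * WH) @ H.T + eps)
        WH = W @ H
        H *= (W.T @ (M * V)) / (W.T @ (M * WH) + eps)
    return W @ H  # reconstruction; read it at the bins where M == 0

# Phase at the missing bins still has to be estimated separately
# (e.g. by phase reconstruction) before inverting the STFT.
```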
For music, the DNN significantly outperformed the reference method, demonstrating the general suitability of the proposed DNN structure for inpainting complex audio signals such as music.
The proposed model outperforms the classical WGAN model, improving the reconstruction of high-frequency content and yielding better results for instruments whose frequency spectrum lies mainly in the lower range, where small noises are less annoying to the human ear and the inpainted segment is more perceptible.
Surprisingly, in most cases, such an approximation is shown to provide even better numerical results in audio inpainting than its proper counterpart, while being computationally much more efficient.
Improving the information reliability of the audio system is critical to safeguarding its security. Adversarial samples, crafted by in-the-wild attackers who introduce perturbations into the audio, have become a severe threat to the trustworthiness of deep-learning-based classifiers. To achieve dynamic defence against audio adversarial sample attacks, a low-resolution double deep audio waveform prior network (LowDDAWP-Net) for audio system reliability defence is proposed. Specifically, LowDDAWP-Net consists of a noise audio prior extraction module ($\mathbf{DAWP}_{\mathbf{noise}}$), a speech prior extraction module ($\mathbf{DAWP}_{\mathbf{speech}}$), a low-resolution extraction module (LREM), and a voice activity detection module (VADM). The VADM automatically separates voice-activity segments from silent segments in the audio signal. $\mathbf{DAWP}_{\mathbf{speech}}$ and $\mathbf{DAWP}_{\mathbf{noise}}$ are encoder–decoders with the same architecture: the encoder extracts superficial features of the input audio, and the decoder performs temporal fusion to form high-dimensional features and reconstructs them into waveform signals. The LREM extracts low-resolution audio, which lets the encoder–decoder refine detail at low resolution and speeds up the recovery of the DAWP networks to high resolution. Adversarial samples generated by several diverse state-of-the-art attacks on three different datasets, together with their corresponding benign samples, form a novel private dataset. Qualitative and quantitative results on this dataset demonstrate the effectiveness and superiority of LowDDAWP-Net.
The results show that CQT-Diff outperforms the compared baselines and ablations in audio bandwidth extension and, without retraining, delivers competitive performance against modern baselines in audio inpainting and declipping.
This work presents Msanii, a novel diffusion-based model for efficiently synthesizing long-context, high-fidelity music, which combines the expressiveness of mel spectrograms, the generative capabilities of diffusion models, and the vocoding capabilities of neural vocoders.
The proposed method uses an unconditionally trained generative model that can be conditioned in a zero-shot fashion for audio inpainting and is able to regenerate gaps of any size; it can be applied to restoring sound recordings that suffer from severe local disturbances or dropouts and must be reconstructed.
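A common way to condition an unconditional diffusion model for inpainting in a zero-shot fashion is to enforce data consistency during sampling: at every reverse step, the known samples are replaced by a correspondingly noised copy of the observed audio, so the model only has to generate the gap. The sketch below follows that generic recipe; `denoise_step`, the noise schedule, and all names are placeholders rather than the paper's actual implementation.

```python
# Hedged sketch of zero-shot inpainting with a pretrained, unconditional
# diffusion model: enforce data consistency on the known samples at each
# reverse step. The denoiser and schedule are assumed to be given.
import numpy as np

def zero_shot_inpaint(observed, mask, denoise_step, alphas_cumprod, rng=None):
    """observed: degraded waveform; mask: True where samples are known.

    denoise_step(x_t, t) -> x_{t-1}: one reverse step of a pretrained,
    unconditional diffusion model (assumed available).
    alphas_cumprod: cumulative noise-schedule products, indexed by t.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(observed.shape)          # start from pure noise
    for t in range(len(alphas_cumprod) - 1, -1, -1):
        a_bar = alphas_cumprod[t]
        # Noise the observed audio to the current diffusion level ...
        known_t = (np.sqrt(a_bar) * observed
                   + np.sqrt(1.0 - a_bar) * rng.standard_normal(observed.shape))
        # ... and enforce data consistency outside the gap.
        x = np.where(mask, known_t, x)
        x = denoise_step(x, t)                       # unconditional model
    return np.where(mask, observed, x)               # keep known samples exactly
```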