3260 papers • 126 benchmarks • 313 datasets
(Image credit: Papersgraph)
These leaderboards are used to track progress in Deep Clustering.
Use these libraries to find Deep Clustering models and implementations.
This work presents DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features and outperforms the current state of the art by a significant margin on all the standard benchmarks.
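The alternation DeepCluster describes (cluster the current features, then train the network on the resulting pseudo-labels) can be sketched as follows. Everything here is a toy stand-in: a linear map replaces the CNN, two Gaussian blobs replace image features, and the learning rate, epoch count, and cluster count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs standing in for image features.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),
               rng.normal(2.0, 0.5, (50, 2))])

def kmeans(feats, k, iters=20):
    """Plain k-means with farthest-point init; returns hard assignments."""
    centers = feats[[0]]
    while len(centers) < k:
        d = np.linalg.norm(feats[:, None] - centers[None], axis=2).min(1)
        centers = np.vstack([centers, feats[np.argmax(d)]])
    for _ in range(iters):
        labels = np.linalg.norm(feats[:, None] - centers[None], axis=2).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(0)
    return labels

def softmax(z):
    e = np.exp(z - z.max(1, keepdims=True))
    return e / e.sum(1, keepdims=True)

# A linear "network": W1 produces features, W2 is the classifier head.
W1 = rng.normal(0, 0.1, (2, 2))
W2 = rng.normal(0, 0.1, (2, 2))

for epoch in range(20):
    labels = kmeans(X @ W1, k=2)         # step 1: cluster current features
    onehot = np.eye(2)[labels]
    for _ in range(10):                  # step 2: fit the pseudo-labels by SGD
        feats = X @ W1
        p = softmax(feats @ W2)
        g = (p - onehot) / len(X)        # cross-entropy gradient at the logits
        W2 -= 0.5 * feats.T @ g
        W1 -= 0.5 * X.T @ (g @ W2.T)

final = kmeans(X @ W1, k=2)
```

The key property the sketch keeps is that the pseudo-labels are frozen for each training round and recomputed from scratch after the features move.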
Preliminary experiments on single-channel mixtures from multiple speakers show that a speaker-independent model trained on two-speaker mixtures can improve signal quality for mixtures of held-out speakers by an average of 6 dB, and the same model does surprisingly well with three-speaker mixtures.
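The deep-clustering recipe behind these separation results clusters learned time-frequency embeddings and turns the clusters into binary masks. A minimal sketch, with fabricated 2-D embeddings standing in for the network outputs of the real system (all shapes and the noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy magnitude spectrograms (freq x time) for two sources and their mixture.
F, T = 8, 10
s1 = rng.random((F, T))
s2 = rng.random((F, T))
mix = s1 + s2

# The real system maps every time-frequency bin to a D-dim embedding; here
# we fabricate 2-D embeddings that encode which source dominates each bin.
dominant = (s1 > s2).astype(float).ravel()
emb = np.stack([dominant, 1.0 - dominant], axis=1)
emb += rng.normal(0, 0.05, emb.shape)

# K-means over the embeddings: one cluster per speaker.
c = np.vstack([emb[0], emb[np.argmax(np.linalg.norm(emb - emb[0], axis=1))]])
for _ in range(10):
    a = np.linalg.norm(emb[:, None] - c[None], axis=2).argmin(1)
    for k in (0, 1):
        if (a == k).any():
            c[k] = emb[a == k].mean(0)

# Binary masks from the cluster assignments, applied to the mixture.
mask1 = (a == a[np.argmax(dominant)]).reshape(F, T)
est1 = mix * mask1
est2 = mix * ~mask1
```

Because the mask is built per time-frequency bin, the same trained embedding network can be clustered into any number of speakers at test time, which is why a two-speaker model can be applied to three-speaker mixtures.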
This work quantitatively shows, across a range of image and time-series datasets, that the proposed method is competitive with the latest deep clustering algorithms, outperforming the current state of the art on several of them.
This paper significantly improves upon the baseline system performance by incorporating better regularization, larger temporal context, and a deeper architecture, culminating in an overall improvement in signal-to-distortion ratio (SDR) of 10.3 dB over the baseline, and produces unprecedented performance on a challenging speech separation task.
This work presents a novel way to fit self-organizing maps with probabilistic cluster assignments (PSOM), a new deep architecture for probabilistic clustering (DPSOM), and an extension to time series data (T-DPSOM), which achieve superior clustering performance compared to current deep clustering methods on static MNIST/Fashion-MNIST data as well as medical time series, while also inducing an interpretable representation.
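Probabilistic cluster assignments of the kind PSOM fits can be illustrated with the Student's-t soft assignment popularized by DEC; DPSOM uses a related soft assignment over SOM nodes. A minimal sketch (the points and centroids are made-up values):

```python
import numpy as np

def soft_assign(z, centroids, alpha=1.0):
    """Student's-t soft cluster assignments: q[i, j] is the probability
    that embedding z[i] belongs to cluster j."""
    d2 = ((z[:, None] - centroids[None]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(1, keepdims=True)

# Two toy embeddings near the two centroids, one in between-ish.
z = np.array([[0.0, 0.0], [4.0, 4.0], [0.1, -0.1]])
mu = np.array([[0.0, 0.0], [4.0, 4.0]])
q = soft_assign(z, mu)
```

Unlike a hard k-means assignment, every point gets a full distribution over clusters, which is what makes the representation differentiable and trainable end to end.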
A Structural Deep Clustering Network (SDCN) is proposed to integrate the structural information into deep clustering, with a delivery operator to transfer the representations learned by autoencoder to the corresponding GCN layer, and a dual self-supervised mechanism to unify these two different deep neural architectures and guide the update of the whole model.
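The delivery operator SDCN describes can be pictured as mixing the autoencoder representation into the input of the corresponding GCN layer before graph propagation. A toy numpy sketch under assumed shapes: `eps` is a hypothetical mixing weight, and both the "autoencoder" and the previous GCN layer are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy graph: two connected pairs of nodes, self-loops included.
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X = rng.random((4, 3))                  # node features

# Symmetrically normalized adjacency, as in a standard GCN layer.
deg = A.sum(1)
A_hat = A / np.sqrt(deg[:, None] * deg[None, :])

H = X @ rng.random((3, 3))              # stand-in autoencoder representation
Z = X @ rng.random((3, 3))              # stand-in previous GCN layer output
W = rng.random((3, 3))                  # GCN weight matrix

# "Delivery": blend H into the GCN input before propagating (eps is a
# hypothetical mixing weight; SDCN balances the two terms similarly).
eps = 0.5
Z_next = np.maximum(A_hat @ ((1 - eps) * Z + eps * H) @ W, 0.0)  # ReLU
```

The point of the blend is that structural propagation through `A_hat` then acts on features that still carry the autoencoder's reconstruction-driven information, rather than on the GCN stream alone.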
This work describes the proposed method as Structurally Regularized Deep Clustering (SRDC), where it enhances target discrimination with clustering of intermediate network features, and enhances structural regularization with soft selection of less divergent source examples.
A novel neural network model uses a dissimilarity function to generalize a family of density estimation and clustering methods, and can leverage deep representation learning thanks to its straightforward incorporation into deep learning architectures.