This work proposes a Recurrent GAN (RGAN) and a Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data.
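A minimal sketch of the recurrent generator/discriminator idea behind RGAN is shown below; this is not the authors' implementation, and the hidden sizes, noise dimension, and output dimension are illustrative assumptions.

```python
# Sketch of a recurrent GAN for real-valued multi-dimensional time series.
import torch
import torch.nn as nn

class RecurrentGenerator(nn.Module):
    def __init__(self, noise_dim=5, hidden_dim=64, out_dim=3):
        super().__init__()
        self.rnn = nn.LSTM(noise_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, z):            # z: (batch, seq_len, noise_dim)
        h, _ = self.rnn(z)           # per-step hidden states
        return self.proj(h)          # (batch, seq_len, out_dim)

class RecurrentDiscriminator(nn.Module):
    def __init__(self, in_dim=3, hidden_dim=64):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, 1)

    def forward(self, x):            # x: (batch, seq_len, in_dim)
        h, _ = self.rnn(x)
        return self.proj(h)          # per-step real/fake logits

# Usage: sample one synthetic batch of 8 series of length 30.
G = RecurrentGenerator()
fake = G(torch.randn(8, 30, 5))      # (8, 30, 3)
```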
This work proposes GRATIS, a method for GeneRAting TIme Series with diverse and controllable characteristics based on mixture autoregressive (MAR) models, and demonstrates the usefulness of the generation process through a time series forecasting application.
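The sketch below simulates a series from a mixture autoregressive model, the building block GRATIS uses to span diverse time-series characteristics; the component weights, AR coefficients, and noise scales are illustrative, not taken from the paper.

```python
# Simulate one path from a mixture autoregressive (MAR) model.
import numpy as np

def simulate_mar(n, weights, ar_coefs, sigmas, burn_in=100, seed=0):
    """weights[k]: mixing probability, ar_coefs[k]: AR coefficients,
    sigmas[k]: noise standard deviation of component k."""
    rng = np.random.default_rng(seed)
    p = max(len(c) for c in ar_coefs)
    x = np.zeros(n + burn_in + p)
    for t in range(p, len(x)):
        k = rng.choice(len(weights), p=weights)   # pick a mixture component
        coefs = ar_coefs[k]
        # AR recursion: phi_1*x[t-1] + phi_2*x[t-2] + ... + noise
        x[t] = coefs @ x[t - len(coefs):t][::-1] + rng.normal(0, sigmas[k])
    return x[-n:]

series = simulate_mar(
    n=200,
    weights=[0.6, 0.4],
    ar_coefs=[np.array([0.8]), np.array([-0.5, 0.3])],
    sigmas=[1.0, 2.0],
)
```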
Generative adversarial networks (GANs) have recently been highly successful in generative applications involving images and are now being applied to time series data. Here we describe EEG-GAN, a framework to generate electroencephalographic (EEG) brain signals. We introduce a modification to the improved training of Wasserstein GANs to stabilize training and investigate a range of architectural choices critical for time series generation (most notably up- and down-sampling). For evaluation we consider and compare metrics such as the Inception score, Fréchet inception distance, and sliced Wasserstein distance, which together show that our EEG-GAN framework generates naturalistic EEG examples. It thus opens up a range of new generative application scenarios in the neuroscientific and neurological context, such as data augmentation in brain-computer interfacing tasks, EEG super-sampling, or restoration of corrupted data segments. The possibility of generating signals of a certain class and/or with specific properties may also open a new avenue for research into the underlying structure of brain signals.
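For reference, the sketch below shows the standard WGAN gradient-penalty term that the "improved training of Wasserstein GANs" refers to; it is the common baseline formulation, not the paper's specific stabilizing modification.

```python
# Standard WGAN-GP gradient penalty for a critic over time-series batches.
import torch

def gradient_penalty(critic, real, fake):
    # real, fake: (batch, channels, length)
    eps = torch.rand(real.size(0), 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores.sum(), inputs=interp, create_graph=True
    )[0]
    # penalize deviation of the gradient norm from 1 (Lipschitz constraint)
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```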
A novel generator, the conditional AR-FNN, is developed; it is designed to capture the temporal dependence of time series, can be trained efficiently, and consistently and significantly outperforms state-of-the-art benchmarks with respect to measures of similarity and predictive ability.
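A minimal sketch of the autoregressive feed-forward idea is given below: a network maps the most recent lags plus a noise draw to the next value and is rolled forward to generate a path. The lag order, noise dimension, and layer sizes are assumptions for illustration, not the paper's configuration.

```python
# Autoregressive feed-forward generator rolled out over time.
import torch
import torch.nn as nn

class ARFNNGenerator(nn.Module):
    def __init__(self, p=3, noise_dim=4, hidden=32):
        super().__init__()
        self.p, self.noise_dim = p, noise_dim
        self.net = nn.Sequential(
            nn.Linear(p + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, history, steps):
        # history: (batch, p) most recent observations
        out, h = [], history
        for _ in range(steps):
            z = torch.randn(h.size(0), self.noise_dim)
            x_next = self.net(torch.cat([h, z], dim=1))   # (batch, 1)
            out.append(x_next)
            h = torch.cat([h[:, 1:], x_next], dim=1)      # slide the lag window
        return torch.cat(out, dim=1)                      # (batch, steps)

gen = ARFNNGenerator()
paths = gen(torch.zeros(8, 3), steps=50)                  # 8 synthetic paths
```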
This work proposes a novel architecture for synthetically generating time-series data using Variational Auto-Encoders (VAEs); it can incorporate domain-specific temporal patterns such as polynomial trends and seasonalities to provide interpretable outputs.
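The sketch below shows one way an interpretable polynomial-trend head can be attached to a VAE-style decoder, in the spirit of this approach; the trend degree, latent dimension, and sequence length are illustrative assumptions rather than the authors' design.

```python
# Decoder head that maps a latent code to an interpretable polynomial trend.
import torch
import torch.nn as nn

class TrendDecoder(nn.Module):
    def __init__(self, latent_dim=8, seq_len=100, degree=2):
        super().__init__()
        self.coef_head = nn.Linear(latent_dim, degree + 1)  # trend coefficients
        t = torch.linspace(0, 1, seq_len)
        # basis[:, d] = t**d, so the output is a polynomial in normalized time
        self.register_buffer(
            "basis", torch.stack([t**d for d in range(degree + 1)], dim=1)
        )

    def forward(self, z):                 # z: (batch, latent_dim)
        coefs = self.coef_head(z)         # (batch, degree + 1)
        return coefs @ self.basis.T       # (batch, seq_len) polynomial trend

dec = TrendDecoder()
trend = dec(torch.randn(4, 8))            # 4 interpretable trend curves
```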
TimeVQVAE is proposed, to the authors' knowledge the first work that uses vector quantization (VQ) techniques to address the time series generation (TSG) problem; the priors of the discrete latent spaces are learned with bidirectional transformer models that can better capture global temporal consistency.
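A minimal sketch of the vector-quantization step at the core of this kind of model is shown below: encoder outputs are snapped to their nearest codebook vectors, and the resulting discrete token ids are what a transformer prior would later model. The codebook size and latent dimensions are assumptions for illustration.

```python
# Nearest-neighbour vector quantization with a straight-through estimator.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=256, code_dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z_e):               # z_e: (batch, tokens, code_dim)
        # distances from each latent to every codebook vector
        codes = self.codebook.weight.unsqueeze(0).expand(z_e.size(0), -1, -1)
        idx = torch.cdist(z_e, codes).argmin(dim=-1)   # discrete token ids
        z_q = self.codebook(idx)                        # quantized latents
        # straight-through estimator so gradients flow back to the encoder
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx

vq = VectorQuantizer()
z_q, tokens = vq(torch.randn(2, 16, 64))
```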
It is demonstrated how standard GANs may give rise to non-parsimonious input-output maps that are sensitive to perturbations, which motivates the need for constraints and regularisation on GAN generators.
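The sketch below illustrates one common way to constrain a generator's sensitivity to input perturbations: wrapping its layers in spectral normalization so the layer-wise Lipschitz constants are bounded. This is an illustrative regularisation choice, not necessarily the constraint analysed in the paper.

```python
# Spectral normalization bounds how much a small input perturbation can
# change the generator's output.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

generator = nn.Sequential(
    spectral_norm(nn.Linear(16, 128)), nn.ReLU(),
    spectral_norm(nn.Linear(128, 100)),   # output: series of length 100
)

z = torch.randn(4, 16)
delta = 1e-3 * torch.randn_like(z)
# the output change is bounded roughly in proportion to the input perturbation
print((generator(z + delta) - generator(z)).norm() / delta.norm())
```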
The SigWGAN is developed by combining continuous-time stochastic models with the newly proposed signature W1 metric, which allows turning the computationally challenging GAN min-max problem into a supervised learning problem while generating high-fidelity samples.
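The sketch below conveys the idea behind a signature-based objective: compare expected truncated path signatures of real and generated paths, turning training into supervised moment matching. Truncating at depth 2 and approximating the iterated integrals with left Riemann sums are simplifications, not the paper's exact construction.

```python
# Depth-2 truncated path signatures and a moment-matching loss between
# the expected signatures of two path distributions.
import torch

def signature_depth2(paths):
    # paths: (batch, seq_len, dim)
    dx = paths[:, 1:] - paths[:, :-1]                  # increments
    level1 = dx.sum(dim=1)                             # (batch, dim)
    x_rel = paths[:, :-1] - paths[:, :1]               # path relative to start
    # discrete approximation of the iterated integrals S^{i,j}
    level2 = torch.einsum("bti,btj->bij", x_rel, dx)   # (batch, dim, dim)
    return torch.cat([level1, level2.flatten(1)], dim=1)

def sig_loss(real, fake):
    # squared distance between expected signatures
    return (signature_depth2(real).mean(0)
            - signature_depth2(fake).mean(0)).pow(2).sum()

real = torch.randn(32, 50, 2).cumsum(dim=1)
fake = torch.randn(32, 50, 2).cumsum(dim=1)
loss = sig_loss(real, fake)
```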
This work combines the disciplines of GAN-based data augmentation and scenario forecasting, filling the gap in the generation of synthetic data in DCs, and proposes a methodology to increase the variability and heterogeneity of the generated data by introducing on-demand anomalies without additional effort or expert knowledge.
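The sketch below shows one simple way "on-demand" anomalies can be injected into synthetic series for augmentation; the anomaly types (spikes and a level shift) and magnitudes are illustrative assumptions, not the paper's procedure.

```python
# Inject point anomalies (spikes) and an optional level shift into a series.
import numpy as np

def inject_anomalies(series, n_spikes=2, shift_at=None, magnitude=5.0, seed=0):
    rng = np.random.default_rng(seed)
    out = series.copy()
    std = series.std()
    for t in rng.choice(len(series), size=n_spikes, replace=False):
        out[t] += magnitude * std                  # point anomaly (spike)
    if shift_at is not None:
        out[shift_at:] += magnitude * std / 2      # level-shift anomaly
    return out

clean = np.sin(np.linspace(0, 6 * np.pi, 300)) + 0.1 * np.random.randn(300)
anomalous = inject_anomalies(clean, n_spikes=3, shift_at=200)
```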