Augmenting Imbalanced Time-series Data via Adversarial Perturbation in Latent Space

The success of training deep learning models largely depends on the amount and quality of training data. Although numerous data augmentation techniques have been proposed for domains such as computer vision, where simple schemes such as rotation and flipping are effective, other domains such as time-series data have a relatively smaller set of readily available augmentation techniques. Moreover, data imbalance is often observed in real-world data, and simple oversampling may make a model vulnerable to overfitting, so proper data augmentation is desired. To tackle these problems, we propose a data augmentation method that utilizes the latent vectors of an autoencoder in a novel way. When input data is perturbed in its latent space, the reconstructed data retains properties similar to the original. Adversarial augmentation, on the other hand, trains robust deep neural networks against unforeseen data shifts or corruptions by providing a downstream model with samples that are difficult to predict. Our method adversarially perturbs input data in its latent space so that the augmented data is diverse and conducive to reducing the test error of a downstream model. The experimental results demonstrate that our method strikes the right balance: it modifies the input data significantly enough to aid generalization while preserving its realism.
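The core idea described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes a pre-trained autoencoder (here, a toy linear one), a downstream loss whose gradient with respect to the latent vector is available, and an FGSM-style perturbation with budget `eps`; the function and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_adversarial_augment(x, encode, decode, loss_grad_z, eps=0.1):
    """Perturb a sample adversarially in latent space, then reconstruct.

    x           : input time-series sample (flattened vector, for simplicity)
    encode/decode : maps of an autoencoder (assumed already trained)
    loss_grad_z : gradient of the downstream model's loss w.r.t. the latent vector
    eps         : perturbation budget in latent space
    """
    z = encode(x)
    # Step in the direction that increases the downstream loss (FGSM-style),
    # producing a sample that is difficult for the downstream model.
    z_adv = z + eps * np.sign(loss_grad_z(z))
    # Decoding keeps the augmented sample close to the data manifold,
    # so it remains realistic despite the perturbation.
    return decode(z_adv)

# Toy linear autoencoder and a quadratic downstream loss for demonstration.
W = rng.standard_normal((4, 8)) * 0.1          # latent dim 4, input dim 8
encode = lambda x: W @ x
decode = lambda z: W.T @ z
target = rng.standard_normal(4)
loss_grad_z = lambda z: z - target             # grad of 0.5 * ||z - target||^2

x = rng.standard_normal(8)
x_aug = latent_adversarial_augment(x, encode, decode, loss_grad_z, eps=0.1)
```

In practice the encoder, decoder, and gradient would come from trained neural networks (e.g. via automatic differentiation), and the augmented samples would be mixed into training batches for the minority class.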
Publisher
ACML
Issue Date
2021-11-17
Language
English
Citation
The 13th Asian Conference on Machine Learning, ACML 2021
URI
http://hdl.handle.net/10203/291807
Appears in Collection
RIMS Conference Papers