A Multimodal Anomaly Detector for Robot-Assisted Feeding Using an LSTM-Based Variational Autoencoder

The detection of anomalous executions is valuable for reducing potential hazards in assistive manipulation. Multimodal sensory signals can be helpful for detecting a wide range of anomalies. However, the fusion of high-dimensional and heterogeneous modalities is a challenging problem for model-based anomaly detection. We introduce a long short-term memory-based variational autoencoder (LSTM-VAE) that fuses signals and reconstructs their expected distribution by introducing a progress-based varying prior. Our LSTM-VAE-based detector reports an anomaly when a reconstruction-based anomaly score exceeds a state-based threshold. In evaluations on 1,555 robot-assisted feeding executions, including 12 representative types of anomalies, our detector achieved a higher area under the receiver operating characteristic curve (0.8710) than five other baseline detectors from the literature. We also show that variational autoencoding and state-based thresholding are effective in detecting anomalies from 17 raw sensory signals without significant feature-engineering effort.
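The detection rule described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): it assumes the LSTM-VAE decoder outputs a Gaussian reconstruction distribution per signal dimension, uses its negative log-likelihood as the anomaly score, and bins task progress into a small number of states, each with its own threshold. All names and values are illustrative.

```python
import numpy as np

def anomaly_score(x, mu, sigma):
    """Reconstruction-based score: negative Gaussian log-likelihood of the
    observed signals x under the decoder's output distribution N(mu, sigma^2),
    summed over signal dimensions."""
    return float(np.sum(0.5 * np.log(2 * np.pi * sigma**2)
                        + (x - mu)**2 / (2 * sigma**2)))

def is_anomalous(score, progress, thresholds):
    """State-based thresholding: bin the task progress (0..1) into
    len(thresholds) states and flag an anomaly when the score exceeds
    the threshold for the current state."""
    state = min(int(progress * len(thresholds)), len(thresholds) - 1)
    return score > thresholds[state]

# Toy example with 17-dimensional raw sensory signals, as in the paper.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 17)      # observed multimodal signals
mu = np.zeros(17)                 # decoder reconstruction mean (hypothetical)
sigma = np.ones(17)               # decoder reconstruction std (hypothetical)
score = anomaly_score(x, mu, sigma)
print(is_anomalous(score, progress=0.4, thresholds=[20.0, 25.0, 30.0]))
```

The state-based threshold lets the detector tolerate higher reconstruction error in noisier phases of the feeding task while staying sensitive in quieter ones.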
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2018-07
Language
English
Article Type
Article
Citation

IEEE Robotics and Automation Letters, v.3, no.3, pp.1544 - 1551

ISSN
2377-3766
DOI
10.1109/LRA.2018.2801475
URI
http://hdl.handle.net/10203/277321
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
2-s2.0-85054490232.pdf (1.34 MB)