Learning Spatio-temporally Invariant Representations from Video

Abstract
Learning invariant representations of environments through experience has been an important area of research in both machine learning and computational neuroscience. In this study, we propose a novel unsupervised method for discovering invariants from a single video input, based on learning the spatio-temporal relationships among inputs. In an experiment, we tested the learning of spatio-temporally invariant features from a single video containing rotational movements of the faces of several subjects. The results demonstrate that the proposed system, which learns invariants from spatio-temporal continuity, is a compelling unsupervised method for learning invariants from input that carries temporal information.
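The abstract does not specify the paper's exact algorithm, but learning invariants from temporal continuity is commonly illustrated with a slowness objective in the style of Slow Feature Analysis: whiten the input, then extract the direction along which consecutive observations change least. The sketch below is a minimal, hypothetical illustration of that idea on synthetic data (the signals, dimensions, and thresholds are assumptions, not taken from the paper).

```python
import numpy as np

# Hypothetical sketch of temporal-continuity ("slowness") learning,
# in the spirit of Slow Feature Analysis; the paper's actual method
# is not specified in this abstract.

rng = np.random.default_rng(0)
T = 500
t = np.linspace(0, 4 * np.pi, T)
slow = np.sin(t)                                  # slowly varying latent signal
fast = rng.standard_normal(T)                     # fast-varying noise
X = np.stack([slow + 0.1 * fast, fast], axis=1)   # mixed 2-D observations

# Whiten the observations (zero mean, identity covariance).
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / T
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
Z = Xc @ W

# Slowness objective: find the unit direction minimizing the variance
# of temporal differences, i.e. the smallest eigenvector of the
# difference covariance.
dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / (T - 1)
dvals, dvecs = np.linalg.eigh(dcov)               # eigenvalues ascending
slow_feature = Z @ dvecs[:, 0]                    # slowest-changing feature

# The extracted slow feature should track the slow latent signal.
corr = abs(np.corrcoef(slow_feature, slow)[0, 1])
```

Here `corr` is close to 1: the direction of slowest change recovers the slowly varying latent, which is the temporal-continuity intuition behind invariant-feature learning from video.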
Publisher
IEEE World Congress on Computational Intelligence
Issue Date
2012-06-10
Language
English
Citation
IEEE World Congress on Computational Intelligence, pp. 1-6

URI
http://hdl.handle.net/10203/169471
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.