Self-organization of Object Features Representing Motion Using Multiple Timescales Recurrent Neural Network

Affordance theory suggests that humans recognize the environment based on invariants: features of the environment that offer behavioral information to humans. Two types of invariants exist: structural invariants and transformational invariants. In our previous paper, we developed a method that self-organizes transformational invariants, or motion features, from camera images based on a robot's experiences. The model used a bi-directional technique combining a recurrent neural network for dynamics learning, namely the Recurrent Neural Network with Parametric Bias (RNNPB), and a hierarchical neural network for feature extraction. The bi-directional training method developed in the previous work was effective for clustering object motions, but the self-organized features (transformational invariants) were poorly segregated among different motion types. In this paper, we present a refined model that integrates dynamics learning and feature extraction into a single model. The refined model is based on the Multiple Timescales Recurrent Neural Network (MTRNN), which possesses better learning capability than RNNPB. Self-organization results for four motion types demonstrated the model's capability to create clusters of object motions. The analysis showed that the model extracted feature sequences with different characteristics for the four object motion types.
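The MTRNN named in the abstract is a continuous-time recurrent network whose units are divided into fast and slow groups by their time constants, so that slow context units capture long-range structure while fast units track input-output dynamics. A minimal sketch of one such update step follows; the unit counts, time constants, and random weights are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Sketch of an MTRNN-style (continuous-time RNN) update. Each unit i follows
#   u_i(t+1) = (1 - 1/tau_i) * u_i(t) + (1/tau_i) * sum_j w_ij * y_j(t),
# with small tau for fast input-output units and large tau for slow context
# units. All sizes and constants below are hypothetical.

rng = np.random.default_rng(0)

n_fast, n_slow = 10, 4                        # fast I/O units, slow context units
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0),   # fast timescale
                      np.full(n_slow, 70.0)]) # slow timescale
W = rng.normal(scale=0.1, size=(n, n))        # fully connected recurrent weights

def step(u, x):
    """One MTRNN update; external input x drives the fast units only."""
    y = np.tanh(u)                 # unit activations (firing rates)
    z = W @ y                      # recurrent drive
    z[:n_fast] += x                # inject input into the fast group
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * z

u = np.zeros(n)                    # internal states
for t in range(50):
    u = step(u, rng.normal(size=n_fast))

print(u.shape)                     # final internal state of all 14 units
```

Because each slow unit mixes only 1/70 of its new input per step, its state drifts gradually across a motion sequence, which is what lets the slow group cluster whole object-motion types rather than individual frames.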
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Issue Date
2012-06-12
Language
English
Citation

2012 IJCNN International Joint Conference on Neural Networks, pp.1 - 8

ISBN
978-1-4673-1488-6
ISSN
2161-4393
URI
http://hdl.handle.net/10203/173043
Appears in Collection
EE-Conference Papers (Conference Papers)
