A view-based multiple objects tracking and human action recognition for interactive virtual environments

DC Field    Value    Language
dc.contributor.author    Choi, Jin    ko
dc.contributor.author    Cho, Yong-il    ko
dc.contributor.author    Bae, Sujung    ko
dc.contributor.author    Yang, Hyun-Seung    ko
dc.contributor.author    Cho, Kyusung    ko
dc.date.accessioned    2010-02-17T05:57:05Z    -
dc.date.available    2010-02-17T05:57:05Z    -
dc.date.created    2012-02-06    -
dc.date.created    2012-02-06    -
dc.date.issued    2008    -
dc.identifier.citation    THE INTERNATIONAL JOURNAL OF VIRTUAL REALITY, v.7, no.3, pp.71 - 76    -
dc.identifier.uri    http://hdl.handle.net/10203/16663    -
dc.description.abstract    As environments become smart in accordance with advances in ubiquitous computing technology, researchers are striving to satisfy users' diverse and sophisticated demands. The aim of the present work is to enable multiple persons in an interactive virtual environment to simultaneously and conveniently interact with virtual agents. To this end, we propose a real-time system that robustly tracks multiple persons in virtual environments and recognizes their actions from image sequences acquired by a single fixed camera. The proposed system is composed of three components: blob extraction, object tracking, and human action recognition. Given an image, we extract blobs using the Mixture of Gaussians algorithm with a hierarchical data structure, and we additionally remove shadows and highlights to obtain a more accurate object silhouette. We then track multiple objects using a motion-based object model and an inference graph that handles grouping and fragmentation problems. Finally, we model an action as a Motion History Image (MHI) computed from the object tracks, normalize the MHI, reduce its dimensionality with PCA, and classify the action using a multi-layer perceptron. To evaluate the performance of the proposed system, we employed it in an augmented reality application in which multiple persons interact with a virtual pet. The results confirm that reliable object tracking is achieved and that the actions of multiple persons can be recognized for applications in interactive virtual environments.    -
dc.description.sponsorship    This research is supported by the Ubiquitous Computing and Networking (UCN) Project of the Ministry of Knowledge Economy (MKE) 21st Century Frontier R&D Program in Korea, and is a result of UCN subproject 08B3-O4-10M.    en
dc.language    English    -
dc.language.iso    en_US    en
dc.publisher    IPI Press    -
dc.title    A view-based multiple objects tracking and human action recognition for interactive virtual environments    -
dc.type    Article    -
dc.type.rims    ART    -
dc.citation.volume    7    -
dc.citation.issue    3    -
dc.citation.beginningpage    71    -
dc.citation.endingpage    76    -
dc.citation.publicationname    THE INTERNATIONAL JOURNAL OF VIRTUAL REALITY    -
dc.embargo.liftdate    9999-12-31    -
dc.embargo.terms    9999-12-31    -
dc.contributor.localauthor    Yang, Hyun-Seung    -
dc.contributor.nonIdAuthor    Choi, Jin    -
dc.contributor.nonIdAuthor    Cho, Yong-il    -
dc.contributor.nonIdAuthor    Bae, Sujung    -
dc.contributor.nonIdAuthor    Cho, Kyusung    -
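The abstract above outlines a three-stage pipeline: Mixture-of-Gaussians blob extraction with shadow removal, multi-object tracking, and MHI + PCA + multi-layer-perceptron action recognition. The sketch below is a minimal illustration of the silhouette and MHI classification stages only, not the authors' implementation: it assumes OpenCV's MOG2 subtractor in place of the paper's hierarchical Mixture of Gaussians, assumes scikit-learn for the PCA and MLP steps, omits the motion-based tracker and inference graph entirely, and all function and parameter names are illustrative.

```python
# Minimal sketch of the silhouette/MHI/classification stages described in the
# abstract. NOT the authors' implementation: MOG2 stands in for the paper's
# hierarchical Mixture of Gaussians, and the tracker/inference graph is omitted.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

MHI_DURATION = 1.0    # seconds of motion history kept in the MHI (assumed value)
MHI_SIZE = (64, 64)   # normalized MHI resolution before PCA (assumed value)

def extract_silhouette(subtractor, frame):
    """Foreground mask with shadow pixels removed (MOG2 marks shadows as 127)."""
    mask = subtractor.apply(frame)
    return np.where(mask == 255, 255, 0).astype(np.uint8)

def update_mhi(mhi, silhouette, timestamp):
    """Motion History Image update: moving pixels take the current timestamp,
    pixels older than MHI_DURATION are cleared."""
    mhi = np.where(silhouette > 0, timestamp, mhi)
    mhi[mhi < timestamp - MHI_DURATION] = 0.0
    return mhi

def mhi_to_feature(mhi, timestamp):
    """Normalize the MHI to [0, 1], resize it, and flatten it into a vector."""
    norm = np.clip((mhi - (timestamp - MHI_DURATION)) / MHI_DURATION, 0.0, 1.0)
    return cv2.resize(norm.astype(np.float32), MHI_SIZE).ravel()

# Offline training, given flattened MHI features X and action labels y:
#   pca = PCA(n_components=50).fit(X)
#   clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(pca.transform(X), y)
# Per-frame use, given a frame at time t:
#   subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
#   silhouette = extract_silhouette(subtractor, frame)
#   mhi = update_mhi(mhi, silhouette, t)
#   action = clf.predict(pca.transform([mhi_to_feature(mhi, t)]))
```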
Appears in Collection
CS-Journal Papers(저널논문)