Robotic gesture generation based on a cognitive basis for non-verbal communication

This paper introduces a semantic synthesis method that enables robots to generate human-like gestures by recognizing cognitive and emotional behaviors in a given situation. Assuming that the human cognitive process can be represented as a series of associated events, we propose a virtually touchable space associated with the robot's hands. In a humanoid robot, the motion of the two arms is a crucial non-verbal communication channel because large spatial changes capture a human observer's attention. Virtual spaces related to particular events are delineated by the robot's hands, and the concept is tested by expressing the robot's cognitive process through combinations of predefined motion sets.
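The abstract's core idea, expressing a series of cognitive events through combinations of predefined motion sets, can be sketched roughly as a lookup-and-concatenate step. This is a minimal illustration only; the event names, motion primitives, and the `plan_gesture` function are assumptions for the sketch, not the paper's actual implementation.

```python
# Hypothetical sketch: map a series of associated cognitive events to
# predefined arm-motion primitives that trace out "virtually touchable"
# spaces. All names below are illustrative, not taken from the paper.

MOTION_SETS = {
    "recall":   ["raise_right_arm", "trace_virtual_box"],
    "surprise": ["spread_both_arms"],
    "point":    ["extend_right_arm", "hold"],
}

def plan_gesture(events):
    """Concatenate the predefined motion primitives for each event in a
    cognitive sequence, skipping events with no associated motion set."""
    plan = []
    for event in events:
        plan.extend(MOTION_SETS.get(event, []))
    return plan

print(plan_gesture(["recall", "point"]))
# ['raise_right_arm', 'trace_virtual_box', 'extend_right_arm', 'hold']
```

A real system would replace the string primitives with parameterized joint trajectories for the two arms, but the event-to-motion association is the part the abstract emphasizes.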
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2014-11
Language
English
Citation

2014 11th International Conference on Ubiquitous Robots and Ambient Intelligence, URAI 2014, pp.683 - 687

ISSN
2325-033X
DOI
10.1109/URAI.2014.7057497
URI
http://hdl.handle.net/10203/314429
Appears in Collection
ME-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
