This paper introduces a semantic synthesis method that enables robots to generate human-like gestures by recognizing cognitive and emotional behaviors in a given situation. Assuming that the human cognitive process can be represented as a series of associated events, we propose a virtually touchable space associated with the robot's hands. In a humanoid robot, the motions of the two arms are a crucial non-verbal communication channel because large spatial changes capture the attention of a human observer; accordingly, virtual spaces related to particular events are described by the robot's hands. The concept of virtual spaces is evaluated by expressing the robot's cognitive process through combinations of predefined motion sets.
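As an illustration only, and not the authors' implementation, the following Python sketch shows one way such an event-to-gesture mapping could be organized: recognized cognitive events are looked up in a table of predefined two-arm motion primitives and concatenated into a gesture sequence. All names here (CognitiveEvent, MotionPrimitive, MOTION_SETS, synthesize_gesture) and the joint-angle values are hypothetical placeholders.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List

# Hypothetical cognitive events the robot might recognize from a situation.
class CognitiveEvent(Enum):
    RECALL = auto()      # remembering an associated past event
    ATTEND = auto()      # directing attention to an object or region
    CONFUSE = auto()     # uncertainty about the current situation

# A predefined motion primitive: joint-space targets for both arms.
@dataclass
class MotionPrimitive:
    name: str
    left_arm: List[float]    # joint angles (rad), placeholder values
    right_arm: List[float]

# Predefined motion sets; each event is associated with a different
# virtually touchable space described by the hands.
MOTION_SETS = {
    CognitiveEvent.RECALL: MotionPrimitive("touch_past_space", [0.3, -0.5, 0.1], [0.0, 0.0, 0.0]),
    CognitiveEvent.ATTEND: MotionPrimitive("point_to_event_space", [0.0, 0.0, 0.0], [0.8, -0.2, 0.4]),
    CognitiveEvent.CONFUSE: MotionPrimitive("spread_both_arms", [0.6, 0.4, -0.3], [0.6, 0.4, -0.3]),
}

def synthesize_gesture(events: List[CognitiveEvent]) -> List[MotionPrimitive]:
    """Map a series of associated events to a sequence of predefined motions."""
    return [MOTION_SETS[e] for e in events if e in MOTION_SETS]

if __name__ == "__main__":
    # Example: a recalled event followed by attention to a new one.
    sequence = synthesize_gesture([CognitiveEvent.RECALL, CognitiveEvent.ATTEND])
    for motion in sequence:
        print(motion.name, motion.left_arm, motion.right_arm)
```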