Motion retargeting reduces an animator's effort in creating robot motion by adapting human motion. However, it still requires several manual landmark placements to achieve satisfactory whole-body retargeting. To reduce the effort of placing landmarks for corresponding minor body parts, this dissertation first proposes a volumetric pose retargeting method for general humanoid robots that considers body-shape similarity in addition to traditional landmark-based similarity. An additional strategy that matches the volumetric distribution of body shape between a human and a robot is presented as guidance to handle the redundancy arising from fewer landmarks and to enforce consistent outcomes under ambiguous landmark placements. A kinematically constrained Gaussian mixture model, originally used for volumetric model-based human tracking, is adapted and modified to manage both the shape and the landmarks in the proposed method. Shape and landmark similarity metrics are introduced, and the overall similarity metric is defined as their weighted sum, with weighting coefficients that let animators control the balance between the two policies. Expectation-maximization-based optimization is then used to compute the target robot joint angles from the human demonstration frame by frame. Second, a rigged point set is proposed by extending the concept of a rigged mesh, and volumetric motion retargeting is developed by modifying the proposed volumetric pose retargeting to maintain motion smoothness during retargeting. By maintaining the initial shape correspondence throughout the retargeted motion, volumetric motion retargeting achieves temporal continuity, robustness to self-occlusion, and reduced computational cost. Both methods are validated by experiments that demonstrate the effectiveness of body-shape matching, controllability through the weighting coefficients, and generality across different humanoid robots.
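
The weighted combination of the two similarity metrics can be sketched as follows; the symbols here ($E_{\mathrm{shape}}$, $E_{\mathrm{landmark}}$, $w_s$, $w_l$, $\boldsymbol{\theta}$) are illustrative assumptions rather than the dissertation's own notation:

```latex
E(\boldsymbol{\theta}) \;=\; w_s\, E_{\mathrm{shape}}(\boldsymbol{\theta}) \;+\; w_l\, E_{\mathrm{landmark}}(\boldsymbol{\theta})
```

where $\boldsymbol{\theta}$ denotes the target robot joint angles optimized frame by frame via expectation-maximization, and the weighting coefficients $w_s$ and $w_l$ let the animator balance shape matching against landmark matching.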