This thesis presents a method for interactively manipulating 3D facial expressions by dragging the facial elements of a 2D face model, such as the eyebrows, eyelids, or lips. Adopting a scattered data interpolation technique, our approach consists of two parts: analysis of face key-models and synthesis of facial expressions. In the analysis part, carried out once at the beginning, both the 2D and 3D key-models are automatically segmented into three regions, each containing one of three facial features (the left eye, the right eye, and the mouth), which give rise to three sets of 2D control key-shapes and their corresponding sets of 3D subject key-shapes. Using the key-shapes of each 2D facial feature, those of the corresponding 3D facial feature are parameterized. In the synthesis part, given a sequence of 2D face models reflecting the animator's intention, the three 3D subject output features are obtained separately by blending their own 3D key-shapes. These separately produced features are then combined to synthesize the 3D output face model at each frame. Our approach enables intuitive manipulation of 3D facial expressions with a 2D interface while exhibiting on-line, real-time performance.
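The per-feature synthesis step described above can be sketched as follows. This is a minimal illustration, not the thesis's exact formulation: it assumes a Gaussian radial-basis kernel for the scattered data interpolation, and all function names (`rbf_weights`, `blend_feature_3d`) and parameters (`sigma`) are hypothetical.

```python
import numpy as np

def rbf_weights(control_2d, key_shapes_2d, sigma=1.0):
    """Gaussian RBF weights of the current 2D shape w.r.t. the 2D key-shapes.

    control_2d    : flattened 2D control shape dragged by the animator
    key_shapes_2d : array of 2D control key-shapes, same flattened layout
    """
    # Squared distance from the current control shape to each 2D key-shape.
    d2 = np.array([np.sum((control_2d - k) ** 2) for k in key_shapes_2d])
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Normalize so the blended result is an affine combination of key-shapes.
    return w / w.sum()

def blend_feature_3d(control_2d, key_shapes_2d, key_shapes_3d, sigma=1.0):
    """Blend the 3D key-shapes of one facial feature (an eye or the mouth).

    key_shapes_3d : array of 3D subject key-shapes, one per 2D key-shape
    """
    w = rbf_weights(control_2d, key_shapes_2d, sigma)
    # Weighted sum of the 3D key-shapes: one blended 3D feature shape.
    return np.tensordot(w, key_shapes_3d, axes=1)
```

At each frame, this blend would be evaluated once per feature (left eye, right eye, mouth), and the three resulting 3D feature shapes combined into the output face model.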