In this thesis, we present a new approach to online performance-driven facial animation. Our basic idea is to capture the facial motion of a real performer and adapt it to a virtual character in real time. For this purpose, we address three issues: facial expression capture, facial example construction, and facial animation.
We first propose a comprehensive solution for real-time facial expression capture without any devices such as head-mounted cameras or face-attached markers. Our scheme first captures the 2D facial features and 3D head motion by exploiting anthropometric knowledge, and then captures the time-varying 3D feature positions due only to facial expression. We adopt a Kalman filter to track the 3D features guided by their captured 2D positions, correcting their drift due to 3D head motion as well as removing noise.
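To illustrate the filtering step, the following is a minimal one-dimensional Kalman filter sketch: it smooths a noisy feature coordinate by blending a prediction with each new measurement. The class name, noise parameters, and constant-position motion model are illustrative assumptions, not the thesis implementation (which tracks 3D features from 2D observations).

```python
class Kalman1D:
    """Minimal 1D Kalman filter for smoothing a noisy feature coordinate.

    Assumed parameters (not from the thesis):
      q - process noise, how fast the true position may drift
      r - measurement noise, jitter of the 2D feature tracker
    """

    def __init__(self, x0, p0=1.0, q=1e-3, r=0.1):
        self.x = x0   # state estimate (feature position)
        self.p = p0   # estimate covariance
        self.q = q
        self.r = r

    def update(self, z):
        # Predict: position assumed locally constant, uncertainty grows.
        self.p += self.q
        # Correct: blend the prediction with the new measurement z.
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x


kf = Kalman1D(x0=0.0)
noisy = [0.1, -0.05, 0.08, 0.02, -0.03, 0.04]   # jittery tracker output
smoothed = [kf.update(z) for z in noisy]
print(smoothed)
```

The estimate converges toward the underlying signal while damping frame-to-frame jitter, which is the role the filter plays in stabilizing the tracked features.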
For real-time facial animation, we incorporate multiple face models, called "facial examples". Each of these examples reflects both a facial expression of a distinct type and the designer's insight, serving as a good guideline for animation. However, it is difficult to achieve smooth local deformation and non-uniform blending without any tools, since a facial example consists of a set of vertices with few handles. In order to facilitate local control of facial features, we present a method for extracting a set of wire curves [47] and deformation parameters from a facial example. The extracted wire curves are used both for effectively representing a facial example and for deforming a face model.
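A sketch of the idea behind wire-curve deformation, in the spirit of the "Wires" technique [47]: moving a reference curve into a deformed pose drags nearby mesh vertices along, with influence falling off with distance from the curve. The nearest-sample correspondence, smoothstep falloff, and radius below are simplifying assumptions for illustration; the actual method extracts the curves and deformation parameters from the facial example itself.

```python
import math


def wire_deform(vertices, rest_curve, posed_curve, radius=1.0):
    """Displace 2D vertices by the motion of the nearest curve sample.

    rest_curve / posed_curve: matched point lists sampling the wire curve
    in its reference and deformed poses. radius is an assumed influence
    range; vertices farther than radius from the curve are left untouched.
    """
    deformed = []
    for vx, vy in vertices:
        # Nearest sample on the rest curve.
        dists = [math.hypot(vx - cx, vy - cy) for cx, cy in rest_curve]
        i = min(range(len(dists)), key=dists.__getitem__)
        # Smooth falloff: full influence on the curve, zero beyond radius.
        t = max(0.0, 1.0 - dists[i] / radius)
        w = t * t * (3.0 - 2.0 * t)           # smoothstep
        dx = posed_curve[i][0] - rest_curve[i][0]
        dy = posed_curve[i][1] - rest_curve[i][1]
        deformed.append((vx + w * dx, vy + w * dy))
    return deformed


rest = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]    # e.g. a lip contour at rest
posed = [(0.0, 0.2), (1.0, 0.3), (2.0, 0.2)]   # the contour raised
verts = [(1.0, 0.1), (1.0, 5.0)]               # one near, one far vertex
print(wire_deform(verts, rest, posed))
```

The near vertex follows the raised contour almost fully, while the far vertex is unaffected, which is the local, handle-like control that raw vertex sets lack.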
Provided with those facial examples, we present a novel example-based approach for creating facial expressions of a model that mimic those of the performer. The main issue is to find the best blend of the facial examples according to the performer's expression. Our scheme predefines the weight functions for the parameterized target examples based on radial basis functions. At runtime, given an input expression of the performer, we evaluate the blendin...
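The blending step can be sketched as follows. Each facial example j is placed at a parameter point p_j; at runtime, a Gaussian kernel centered at each p_j yields a weight, and the normalized weights blend the example shapes per vertex. This normalized-kernel form, the kernel width, and the 2D parameterization are simplifying assumptions; the thesis precomputes its RBF weight functions from the parameterized examples rather than normalizing on the fly.

```python
import math


def rbf_weights(p, example_params, sigma=0.5):
    """Normalized Gaussian-RBF weights of the examples at input point p.

    sigma is an assumed kernel width; example_params are the parameter
    points assigned to the facial examples.
    """
    raw = [math.exp(-sum((a - b) ** 2 for a, b in zip(p, q))
                    / (2.0 * sigma ** 2))
           for q in example_params]
    s = sum(raw)
    return [w / s for w in raw]


def blend(weights, example_shapes):
    """Blend per-vertex 2D positions of the example face models."""
    n = len(example_shapes[0])
    return [tuple(sum(w * shape[i][k]
                      for w, shape in zip(weights, example_shapes))
                  for k in range(2))
            for i in range(n)]


params = [(0.0, 0.0), (1.0, 0.0)]        # e.g. neutral and smile examples
shapes = [[(0.0, 0.0)], [(0.0, 1.0)]]    # one vertex per example, for brevity
w = rbf_weights((0.5, 0.0), params)      # input expression halfway between
print(blend(w, shapes))                  # ≈ [(0.0, 0.5)]
```

An input parameter equidistant from two examples receives equal weights, so the blended vertex lands halfway between the example shapes; as the input approaches an example's parameter point, that example dominates the blend.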