As facial animation databases have grown richer, synthesizing facial animations by reusing existing animations has naturally attracted much attention in computer graphics. In line with this trend, we present a novel data-driven approach to facial animation consisting of expression cloning and expression capturing. The input to expression cloning can be either two sets of corresponding key models or two independent example animations for the source and target face models. Given the source and target key models, we automatically extract a set of coherently moving regions containing facial features and transfer the source face expressions to the target face model region by region. We eliminate the key-model preparation step in blend-shape-based facial expression cloning by solving the following problem: given an input animation together with two independent example animations for the source and target face models, respectively, transfer the facial expressions in the input animation frame by frame from the source face model to the target face model. To provide the input example animations for expression cloning, we present a vision-based approach to capturing human face movements that improves the time complexity of stereo matching and adopts elasticity theory for skin deformation.
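As a rough illustration of how coherently moving regions might be extracted from key models, one could cluster vertices by the similarity of their displacements across the key poses. The sketch below uses k-means for this purpose; the clustering choice and all names (`extract_regions`, `num_regions`) are illustrative assumptions, not the segmentation algorithm used in this work.

```python
# Hypothetical sketch: group face vertices into coherently moving regions by
# clustering their displacement trajectories across the key models. k-means
# is an illustrative choice here, not necessarily the method of the paper.
import numpy as np
from sklearn.cluster import KMeans

def extract_regions(neutral: np.ndarray, key_models: np.ndarray,
                    num_regions: int = 10) -> np.ndarray:
    """Label each vertex with a region index.

    neutral:    (V, 3) neutral-pose vertex positions.
    key_models: (K, V, 3) vertex positions of the K key models.
    Returns:    (V,) region label per vertex.
    """
    # Feature for each vertex: its displacement in every key model, flattened
    # into one (3K,) vector, so vertices that move together cluster together.
    displacements = (key_models - neutral[None]).transpose(1, 0, 2)
    features = displacements.reshape(len(neutral), -1)
    return KMeans(n_clusters=num_regions, n_init=10).fit_predict(features)
```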
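To make the frame-by-frame formulation concrete, the following sketch shows one standard way such a transfer can be realized: for each input frame, solve a least-squares problem for blend weights that reconstruct the source frame from the source example poses, then reuse those weights on the target example poses. This is a minimal sketch under that assumption; the function names and the plain least-squares fit are illustrative, not the exact procedure of the paper.

```python
# Hypothetical sketch of frame-by-frame blend-shape expression transfer.
# Assumes each pose is a flattened (3 * num_vertices,) vertex array and that
# the source and target example animations contain the same number of poses.
import numpy as np

def solve_blend_weights(source_examples: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Least-squares blend weights w such that source_examples @ w ~= frame.

    source_examples: (3V, K) matrix whose columns are source example poses.
    frame:           (3V,)  vertex positions of one input animation frame.
    """
    w, *_ = np.linalg.lstsq(source_examples, frame, rcond=None)
    return w

def clone_animation(source_examples: np.ndarray,
                    target_examples: np.ndarray,
                    input_frames: np.ndarray) -> np.ndarray:
    """Transfer an input animation from the source to the target face model.

    input_frames: (F, 3V_src) source-space animation, one row per frame.
    Returns:      (F, 3V_tgt) target-space animation with the same expressions.
    """
    output = []
    for frame in input_frames:
        w = solve_blend_weights(source_examples, frame)
        # Applying the source weights to the target example poses carries the
        # expression over to the target face model.
        output.append(target_examples @ w)
    return np.stack(output)
```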