In this thesis, a region-based approach is presented for cloning the facial expressions of a source model onto a target model using predefined key-models. With the models segmented into regions, each region can be cloned individually using the region's key-shapes acquired from the key-models. Since the final expressions are obtained by combining the cloned regions, the region-based approach allows complex expressions to be cloned with a small number of key-models. The approach consists of two stages: preprocessing and synthesis. In the preprocessing stage, which is carried out once at the beginning, the models are automatically segmented into three regions using the key-models. Once the regions are segmented, the target key-shapes of each region are parameterized using the corresponding source key-shapes. In the synthesis stage, a cloned target shape for each region is generated by blending the target key-shapes, and these shapes are combined to produce the final target expression for each frame of the input animation at runtime. In the resulting animations, the source model's complex expressions are convincingly cloned onto the target model using a small number of key-models, and the approach runs in real time.
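The synthesis stage described above can be illustrated with a minimal sketch, assuming the blend is a weighted sum of a region's key-shape vertex positions and that regions are combined by writing each region's vertices back into the full mesh. All names, the vertex data layout, and the weights here are illustrative assumptions, not the thesis's actual data structures or interface:

```python
# Hypothetical sketch of the synthesis stage: per-region blend-shape
# interpolation, then region combination into the final expression.

def blend_region(key_shapes, weights):
    """Blend one region's target key-shapes as a weighted sum of vertices.

    key_shapes: list of shapes, each a list of (x, y, z) vertex tuples.
    weights:    one blending weight per key-shape (assumed per frame).
    """
    n_verts = len(key_shapes[0])
    blended = [[0.0, 0.0, 0.0] for _ in range(n_verts)]
    for shape, w in zip(key_shapes, weights):
        for i, (x, y, z) in enumerate(shape):
            blended[i][0] += w * x
            blended[i][1] += w * y
            blended[i][2] += w * z
    return blended

def combine_regions(region_shapes, region_vertex_ids, n_total_verts):
    """Assemble the final target expression by placing each cloned
    region's vertices at their indices in the full target mesh."""
    mesh = [[0.0, 0.0, 0.0] for _ in range(n_total_verts)]
    for verts, ids in zip(region_shapes, region_vertex_ids):
        for v, i in zip(verts, ids):
            mesh[i] = list(v)
    return mesh
```

At each frame, the per-region weights would be derived from the parameterization of the source expression in terms of the source key-shapes; since both steps are simple linear combinations and index assignments, this structure is consistent with the real-time performance the thesis reports.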