We present an approach for extracting high-quality, temporally coherent alpha mattes of objects from video. Our approach extends a conventional matting method, closed-form matting, to video through a multi-frame nonlocal matting Laplacian. This Laplacian is defined over a nonlocal neighborhood in the spatio-temporal domain of the video by evaluating color and texture similarity between pixels across frames; temporal coherence is therefore explicitly encoded in the multi-frame nonlocal matting Laplacian. We also propose methods for speeding up the video matting process and for applying nonlocal mean regularization to further enhance temporal coherence. Our approach is robust even for scenes with complicated backgrounds. We demonstrate its effectiveness on various examples, with qualitative comparisons to the results of previous matting algorithms.