Terrain reconstruction from images is an ill-posed, yet commonly desired Structure from Motion
task when compositing visual effects into live-action photography. These surfaces are required
for scene choreography, for casting physically accurate shadows of CG elements, and for
resolving occlusions. We present a novel framework for generating the geometry of landscapes from extremely
noisy point cloud datasets obtained via limited resolution techniques, particularly optical flow
based vision algorithms applied to live-action video plates. Our contribution is a new statistical
approach to removing erroneous tracks (‘outliers’) that employs a unique combination of
well-established techniques, including Gaussian Mixture Models (GMMs) for robust parameter
estimation and Radial Basis Functions (RBFs) for scattered data interpolation, to exploit the
natural constraints of this problem. Our algorithm obviates the tremendously laborious task of
modeling these landscapes by hand, automatically generating a visually consistent,
camera-position-dependent, thin-shell surface mesh within seconds for a typical tracking shot.
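The two-stage pipeline named above (GMM-based outlier rejection followed by RBF interpolation) can be sketched in plain NumPy. This is a minimal illustration on synthetic data, not the paper's exact formulation: the plane-fit residuals, the two-component 1-D mixture fitted by EM, and the thin-plate-spline kernel are all simplifying assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "terrain" tracks: smooth ground-truth heights plus a few
# gross outliers, mimicking a noisy optical-flow reconstruction.
n = 200
xy = rng.uniform(0.0, 10.0, size=(n, 2))
z = np.sin(xy[:, 0]) + 0.3 * np.cos(xy[:, 1]) + rng.normal(0.0, 0.05, n)
out_idx = rng.choice(n, size=20, replace=False)
z[out_idx] += rng.normal(0.0, 5.0, size=20)    # erroneous tracks

# --- Stage 1: GMM-style outlier rejection on height residuals. ---
# Fit a coarse plane, then model the residuals as a two-component
# 1-D Gaussian mixture (narrow = inliers, wide = outliers) via EM.
A = np.c_[xy, np.ones(n)]
coeff, *_ = np.linalg.lstsq(A, z, rcond=None)
r = z - A @ coeff                              # residuals from plane fit

mu = np.array([0.0, 0.0])
sigma = np.array([0.5 * np.std(r), 2.0 * np.std(r)])
pi = np.array([0.8, 0.2])
for _ in range(50):
    # E-step: responsibility of each component for each residual.
    pdf = (pi / (np.sqrt(2.0 * np.pi) * sigma)
           * np.exp(-0.5 * ((r[:, None] - mu) / sigma) ** 2))
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and spreads.
    nk = resp.sum(axis=0)
    pi = nk / n
    mu = (resp * r[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (r[:, None] - mu) ** 2).sum(axis=0) / nk)
    sigma = np.maximum(sigma, 1e-6)            # guard against collapse

tight = np.argmin(sigma)                       # narrow component = inliers
keep = resp[:, tight] > 0.5

# --- Stage 2: RBF interpolation of the surviving points. ---
# Thin-plate spline kernel phi(d) = d^2 log d; solve for the weights
# with a tiny ridge term for numerical stability.
pts, h = xy[keep], z[keep]
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
phi = np.where(d > 0, d ** 2 * np.log(d + 1e-12), 0.0)
w, *_ = np.linalg.lstsq(phi + 1e-8 * np.eye(len(h)), h, rcond=None)

def surface(q):
    """Evaluate the reconstructed height field at query points q (m, 2)."""
    dq = np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=-1)
    return np.where(dq > 0, dq ** 2 * np.log(dq + 1e-12), 0.0) @ w

print(f"{keep.sum()} of {n} tracks kept; "
      f"{(~keep)[out_idx].sum()} of 20 injected outliers rejected")
```

In practice the narrow mixture component absorbs the small-residual tracks while the wide component soaks up the gross errors, so thresholding the responsibilities acts as the statistical outlier filter; the RBF stage then yields a smooth height field that a mesher could sample into the thin-shell surface described above.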