We propose a fusion method that simultaneously captures the spatially-varying BRDF (SVBRDF) and geometry of an object in motion using a single RGBD camera. State-of-the-art fusion methods capture geometry and often diffuse albedo, but not specular reflectance. Traditional SVBRDF acquisition methods require many input images to obtain dense samples of viewing and lighting angles, using a single camera or a multiview setup, followed by an offline inverse-rendering process that often takes several hours. In this thesis, we introduce a novel fusion method that captures the SVBRDF, geometry, and motion of dynamic objects at interactive rates. Based on observations under both environment illumination and the infrared light of an RGBD camera, we introduce a novel online optimization method that jointly estimates diffuse and specular appearance parameters, in addition to geometry and motion, per frame by accumulating photometric observations in a half-angle buffer within a voxel grid. We progressively refine dynamic geometry and spatially-varying appearance in clusters, enabling interactive online feedback at an average of 880\,ms per frame. Experimental results on various dynamic objects exhibiting a mixture of diffuse and specular reflection validate our method's robust performance in estimating both SVBRDF and geometry in motion using a single RGBD camera.
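To illustrate the half-angle accumulation idea in the abstract, the following is a minimal sketch, not the thesis's actual optimization: a per-voxel buffer bins photometric samples by the half-angle cosine and, once enough samples accumulate, fits a simple diffuse-plus-specular (Blinn-Phong-style) model by linear least squares. The class name, bin count, fixed specular exponent, and reflectance model are all illustrative assumptions introduced here.

```python
# Hypothetical sketch of a per-voxel half-angle buffer; the reflectance
# model and all parameters are assumptions, not the thesis's method.
import numpy as np

N_BINS = 16        # number of half-angle bins per voxel (assumed)
SHININESS = 32.0   # fixed specular exponent for the sketch (assumed)

class HalfAngleBuffer:
    """Accumulates (n.l, n.h, radiance) observations for one voxel."""
    def __init__(self):
        self.sum_rad = np.zeros(N_BINS)    # summed radiance per bin
        self.sum_nl = np.zeros(N_BINS)     # summed n.l per bin
        self.sum_spec = np.zeros(N_BINS)   # summed (n.h)^shininess per bin
        self.count = np.zeros(N_BINS)      # sample count per bin

    def add(self, n_dot_l, n_dot_h, radiance):
        """Bin one photometric observation by its half-angle cosine."""
        b = min(int(n_dot_h * N_BINS), N_BINS - 1)
        self.sum_rad[b] += radiance
        self.sum_nl[b] += n_dot_l
        self.sum_spec[b] += n_dot_h ** SHININESS
        self.count[b] += 1

    def fit(self):
        """Least-squares fit of radiance ~ kd*(n.l) + ks*(n.h)^shininess."""
        mask = self.count > 0
        mean_rad = self.sum_rad[mask] / self.count[mask]
        mean_nl = self.sum_nl[mask] / self.count[mask]
        mean_spec = self.sum_spec[mask] / self.count[mask]
        A = np.stack([mean_nl, mean_spec], axis=1)
        kd, ks = np.linalg.lstsq(A, mean_rad, rcond=None)[0]
        return kd, ks
```

Because observations are accumulated as running sums, the buffer supports the kind of online, per-frame refinement the abstract describes: each new frame only appends to the bins, and the diffuse/specular fit can be re-solved cheaply at any time.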