3D volumetric reconstruction of an object from multiple photographic images is a long-standing task in computer vision and computer graphics, yet it remains one of the most challenging. In general, the volumetric structure of a dynamically changing scene can be recovered to a certain degree, provided that material characteristics, illumination, and geometric constraints are carefully considered, whether we photograph a static scene with moving cameras or moving objects with multiple fixed cameras.
In this paper, we propose a new method that processes multiple synchronized image sequences taken from several cameras and generates a 3D rendered scene of dynamically moving objects. We construct and shade a 3D model of the object from silhouette images by combining the image-based visual hull method with the view morphing method, a basic technique of image-based rendering.
The proposed hybrid method improves both the speed and the quality of the visual hull based 3D reconstruction method previously proposed by Matusik. It provides an efficient image-based 3D scene reconstruction scheme that renders dynamically changing real-world scenes in real time, together with a silhouette extraction scheme that is robust to illumination changes. A system based on this method is relatively low-cost and requires no special hardware or specific environment.
In the experiment, we photograph a person making gestures with four cameras and generate a 3D model and novel-viewpoint images of the person. To acquire high-quality 3D data in real time, we speed up the system with a line caching mechanism and the hybrid combination of the visual hull method and image-based rendering, while we improve the accuracy of the 3D result by computing intersection points with epipolar lines, exploiting the curved form of silhouette boundaries. The experimental result shows that our method enha...
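To make the silhouette-intersection idea behind the visual hull concrete, the sketch below carves a voxel grid against the silhouettes of several calibrated cameras: a voxel survives only if it projects inside every silhouette. This is a minimal voxel-carving illustration of the visual hull concept, not the paper's image-space algorithm; the function name, the camera matrices, and the grid setup are all hypothetical.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid):
    """Approximate a visual hull by voxel carving.

    silhouettes: list of 2D boolean masks, each of shape (H, W)
    projections: list of 3x4 camera projection matrices
    grid:        (N, 3) array of voxel centers in world coordinates
    Returns the voxel centers that lie inside every silhouette cone.
    """
    keep = np.ones(len(grid), dtype=bool)
    homog = np.hstack([grid, np.ones((len(grid), 1))])  # homogeneous coords
    for mask, P in zip(silhouettes, projections):
        h, w = mask.shape
        proj = homog @ P.T                      # project all voxels at once
        uv = proj[:, :2] / proj[:, 2:3]         # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit                             # intersect silhouette cones
    return grid[keep]

# Toy setup: two axis-aligned orthographic views of a 3x3x3 cube.
coords = np.arange(-2, 3)
grid = np.array([[x, y, z] for x in coords for y in coords for z in coords],
                dtype=float)
mask = np.zeros((11, 11), dtype=bool)
mask[4:7, 4:7] = True                           # square silhouette
P_front = np.array([[1, 0, 0, 5], [0, 1, 0, 5], [0, 0, 0, 1]], dtype=float)
P_side = np.array([[0, 0, 1, 5], [0, 1, 0, 5], [0, 0, 0, 1]], dtype=float)
hull = carve_visual_hull([mask, mask], [P_front, P_side], grid)
print(len(hull))  # 27 voxels survive: the 3x3x3 core
```

The image-based formulation in the paper avoids this explicit volumetric grid by intersecting viewing rays with silhouette boundaries directly in image space (via epipolar lines), which is what makes real-time operation feasible.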