Video-guided facial animation (비디오 기반 얼굴 애니메이션)

DC Field: Value
dc.contributor.advisor: Shin, Sung-Yong
dc.contributor.advisor: 신성용
dc.contributor.author: Yoo, Sang-Wook
dc.contributor.author: 유상욱
dc.date.accessioned: 2011-12-13T06:07:19Z
dc.date.available: 2011-12-13T06:07:19Z
dc.date.issued: 2008
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=297256&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/34811
dc.description: Master's thesis (학위논문(석사)) - 한국과학기술원 (KAIST) : Computer Science (전산학전공), 2008.2, [vi, 29 p.]
dc.description.abstract: In this thesis, we present a novel method for synthesizing a realistic facial animation, taking as input a single video that captures various facial expressions and a 3D template mesh of a head model. In the capture process, we do not place any markers or structured light on the face, as they would obstruct the natural performance of the actor or actress. Our basic idea is to use the temporal coherence between consecutive frames and to model the facial skin physically in order to drive the 3D facial animation. To do so, we first estimate the camera matrix using interactively specified corresponding point pairs between the given mesh and the image of the initial frame. At the same time, the temporal coherence between every pair of consecutive images is estimated using an optical flow algorithm. The head motion is then estimated so that the local motion can be extracted from the optical flow. We refine the given template mesh using the local motion while modeling the facial skin, and then apply the estimated head motion to the template mesh for a natural animation. To estimate the error of the deformed mesh, we first synthesize an image by projecting the texture-mapped mesh onto the image plane. The error map is constructed by subtracting the synthesized image from the image of the next frame. We then compute the optical flow locally using the error map, and only the part of the mesh where the optical flow is updated is refined again. We iterate this process until the error falls below a predefined threshold or a fixed number of iterations is reached. (A sketch of this projection-and-refinement loop appears after the metadata fields below.)
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: Facial Animation
dc.subject: Optical Flow
dc.subject: Triangular Finite Element
dc.subject: 얼굴 애니메이션
dc.subject: 광흐름
dc.subject: 삼각형 유한 요소
dc.title: Video-guided facial animation
dc.title.alternative: 비디오 기반 얼굴 애니메이션
dc.type: Thesis (Master)
dc.identifier.CNRN: 297256/325007
dc.description.department: 한국과학기술원 (KAIST) : Computer Science (전산학전공)
dc.identifier.uid: 020063329
dc.contributor.localauthor: Shin, Sung-Yong
dc.contributor.localauthor: 신성용
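
Code sketch of the refinement loop

The abstract describes an iterative pipeline: estimate the camera from interactively specified 2D-3D point pairs, track temporal coherence with optical flow, deform the template mesh with a physically modeled skin, and refine the mesh against an error map obtained by projecting the texture-mapped mesh and subtracting the result from the next frame. The following is a minimal sketch of that loop, not the thesis implementation: OpenCV's solvePnP and Farneback optical flow stand in for the unspecified camera-estimation and optical-flow methods, and render_fn and refine_fn are hypothetical callables (a texture-mapped mesh renderer and the triangular finite-element skin refinement) supplied by the caller.

import cv2
import numpy as np


def estimate_camera(points_3d, points_2d, image_size):
    # Estimate the camera pose from interactively specified 2D-3D point pairs.
    # A pinhole camera with the focal length guessed from the image width is
    # assumed; the thesis does not specify the camera model.
    f = float(image_size[0])
    K = np.array([[f, 0.0, image_size[0] / 2.0],
                  [0.0, f, image_size[1] / 2.0],
                  [0.0, 0.0, 1.0]])
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        K, None)
    return K, rvec, tvec


def refine_until_converged(frame_next, mesh, render_fn, refine_fn,
                           threshold=2.0, max_iters=10):
    # render_fn(mesh)       -> 8-bit grayscale image of the texture-mapped mesh
    #                          projected onto the image plane (hypothetical).
    # refine_fn(mesh, flow) -> mesh deformed by the local motion field
    #                          (hypothetical; the thesis refines the skin with a
    #                          triangular finite-element model at this step).
    target = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
    for _ in range(max_iters):
        synthesized = render_fn(mesh)
        # Error map: difference between the synthesized image and the next frame.
        error_map = cv2.absdiff(target, synthesized)
        if error_map.mean() < threshold:
            break
        # Compute dense optical flow, then keep only the motion in the
        # high-error region (the thesis computes the flow locally; zeroing the
        # flow elsewhere approximates that).
        flow = cv2.calcOpticalFlowFarneback(
            synthesized, target, None, pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        flow[error_map < threshold] = 0.0
        mesh = refine_fn(mesh, flow)
    return mesh

The convergence test (mean error below a threshold, or a fixed number of iterations) follows the abstract directly; everything else, including the focal-length guess, is an assumption made for the sake of a runnable example.
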
Appears in Collection
CS-Theses_Master (석사논문, Master's theses)
Files in This Item
There are no files associated with this item.
