Efficient video representation learning via masked video modeling with motion-centric token selection
동작 중심의 토큰 선택을 통한 효율적 마스크 비디오 표현학습 모델링

DC Field: Value (language code shown where recorded)
dc.contributor.advisor: Hwang, Sung Ju
dc.contributor.advisor: 황성주
dc.contributor.author: Hwang, Sunil
dc.date.accessioned: 2023-06-22T19:31:30Z
dc.date.available: 2023-06-22T19:31:30Z
dc.date.issued: 2023
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1032334&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/308236
dc.description: 학위논문(석사) - 한국과학기술원 : 김재철AI대학원, 2023.2, [iii, 24 p.] (Master's thesis, Kim Jaechul Graduate School of AI, KAIST, February 2023)
dc.description.abstract: Self-supervised Video Representation Learning (VRL) aims to learn transferable representations from uncurated, unlabeled video streams that can be used for diverse downstream tasks. With recent advances in Masked Image Modeling (MIM), in which a model learns to predict randomly masked regions of an image given only the visible patches, MIM-based VRL methods have emerged and demonstrated their potential by significantly outperforming previous VRL methods. However, they require an excessive amount of computation due to the added temporal dimension. This is because existing MIM-based VRL methods resort to random masking strategies and thus overlook the unequal spatial and temporal distribution of information across the patches of incoming videos, wasting computation on predicting uninformative tokens and frames. To tackle these limitations of Masked Video Modeling, we propose a new token selection method that masks the more important tokens according to object motion, which we refer to as Motion-centric Token Selection. Further, we present a dynamic frame selection strategy that allows the model to focus on informative and causal frames with minimal redundancy. We validate our method on multiple benchmark datasets as well as Ego4D, showing that a model pre-trained with our method significantly outperforms state-of-the-art VRL methods on downstream tasks such as action recognition and object state change classification, while greatly reducing memory requirements during pre-training and fine-tuning. (A brief illustrative sketch of the motion-based selection idea follows this record.)
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: Video representation learning; Video action recognition; Object state change classification
dc.subject: 비디오 표현학습; 비디오 행동 인식; 물체 상태 변화 분류 (Korean equivalents of the English subject terms)
dc.title: Efficient video representation learning via masked video modeling with motion-centric token selection
dc.title.alternative: 동작 중심의 토큰 선택을 통한 효율적 마스크 비디오 표현학습 모델링 (Korean title)
dc.type: Thesis(Master)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원: 김재철AI대학원 (KAIST, Kim Jaechul Graduate School of AI)
dc.contributor.alternativeauthor: 황선일
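
The abstract describes Motion-centric Token Selection only at a high level. The following is a minimal sketch of what such a selector could look like, assuming per-patch object motion is approximated by the mean absolute frame difference within each patch and that the highest-motion tokens are the ones selected (e.g., as masked prediction targets). The function names, the patch size, and the frame-difference heuristic are illustrative assumptions, not the thesis's actual implementation.

# Hypothetical sketch of motion-centric token selection (not the thesis code).
# Assumption: per-patch motion is approximated by the mean absolute frame
# difference within each spatial patch; the top-scoring tokens are selected.
import torch

def motion_token_scores(video: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """video: (T, C, H, W) clip. Returns per-frame, per-patch motion scores
    of shape (T-1, H//patch, W//patch)."""
    diff = (video[1:] - video[:-1]).abs().mean(dim=1)   # (T-1, H, W)
    t, h, w = diff.shape
    diff = diff.reshape(t, h // patch, patch, w // patch, patch)
    return diff.mean(dim=(2, 4))                        # average within patches

def select_motion_tokens(video: torch.Tensor, keep_ratio: float = 0.25,
                         patch: int = 16) -> torch.Tensor:
    """Return flat indices of the top-`keep_ratio` highest-motion tokens."""
    scores = motion_token_scores(video, patch).flatten()
    k = max(1, int(keep_ratio * scores.numel()))
    return torch.topk(scores, k).indices

if __name__ == "__main__":
    clip = torch.randn(8, 3, 224, 224)                  # random 8-frame clip
    idx = select_motion_tokens(clip)
    print(f"selected {idx.numel()} of {7 * 14 * 14} tokens")

In a masked video modeling pipeline, the returned indices would determine which tokens enter the masking objective; summing the per-patch scores within each frame would give a per-frame motion score that could, under the same assumptions, drive the dynamic frame selection the abstract mentions.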
Appears in Collection
AI-Theses_Master (석사논문, Master's theses)
Files in This Item
There are no files associated with this item.
