(A) study on audiovisual deep features for video categorization = 영상 분류를 위한 시청각적 심층 특징에 관한 연구

Over the last few decades, many papers on video categorization have been published. Despite the rich information available in videos, previous algorithms for video categorization rely mainly on fusing multiple visual features that capture static and motion information. In other words, previous models do not utilize audio information. In this paper, we propose a video categorization framework that utilizes both the visual and the auditory information in a given video, and we investigate different types of deep features. The framework consists of a feature extractor for each modality and a fusion step that generates an audiovisual feature. For the visual feature, we fine-tuned AlexNet to obtain more discriminative features and measured its performance. Two methods are used and evaluated for capturing audio information from videos: a 1D-CNN and a bag-of-words representation. The highest mean average precision scores are achieved by audiovisual features that combine the fine-tuned AlexNet features with a bag-of-words representation of MFCCs. These results show that audiovisual features help categorize videos without any degradation in performance.
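The thesis itself provides no code here; as a minimal illustration of the two pieces the abstract names, the sketch below shows a bag-of-words encoding of MFCC frames (nearest-codeword histogram) and early fusion by concatenation. All concrete details are assumptions: a 13-dimensional MFCC, an 8-word codebook, and a 4096-dimensional visual vector standing in for a fine-tuned AlexNet fc7 activation.

```python
import numpy as np

def bow_encode(mfcc_frames, codebook):
    """Quantize each MFCC frame to its nearest codeword and return a
    normalized histogram (bag-of-words) over the codebook."""
    # pairwise distances: (n_frames, n_codewords)
    d = np.linalg.norm(mfcc_frames[:, None, :] - codebook[None, :, :], axis=2)
    assignments = d.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def fuse(visual_feat, audio_feat):
    """Early fusion by concatenation; each modality is L2-normalized
    first so neither dominates the joint representation."""
    v = visual_feat / (np.linalg.norm(visual_feat) + 1e-8)
    a = audio_feat / (np.linalg.norm(audio_feat) + 1e-8)
    return np.concatenate([v, a])

# Toy example (all sizes hypothetical): 100 MFCC frames of dimension 13,
# an 8-word codebook, and a 4096-dim visual feature.
rng = np.random.default_rng(0)
mfcc = rng.normal(size=(100, 13))
codebook = rng.normal(size=(8, 13))
visual = rng.normal(size=4096)  # stand-in for an AlexNet fc7 activation

audio_bow = bow_encode(mfcc, codebook)
audiovisual = fuse(visual, audio_bow)
print(audiovisual.shape)  # (4104,)
```

Concatenation is only one of several fusion strategies; the thesis compares feature types rather than prescribing this exact pipeline.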
Advisors
Yoo, Chang Dong (유창동)
Description
Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2016
Identifier
325007
Language
eng
Description

Thesis (Master's) - KAIST: School of Electrical Engineering, 2016.2, [iv, 25 p.]

Keywords

Video Categorization; Deep Learning; Audiovisual; Multi-modal; Convolutional Neural Network

URI
http://hdl.handle.net/10203/221765
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=649575&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
