DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Semin | ko |
dc.contributor.author | Choi, Jaeyoung | ko |
dc.contributor.author | Han, Seungwan | ko |
dc.contributor.author | Ro, YongMan | ko |
dc.date.accessioned | 2014-09-04T08:41:06Z | - |
dc.date.available | 2014-09-04T08:41:06Z | - |
dc.date.created | 2014-07-01 | - |
dc.date.issued | 2014-08 | - |
dc.identifier.citation | SIGNAL PROCESSING-IMAGE COMMUNICATION, v.29, no.7, pp.788 - 806 | - |
dc.identifier.issn | 0923-5965 | - |
dc.identifier.uri | http://hdl.handle.net/10203/190139 | - |
dc.description.abstract | In this paper, we propose a novel modality fusion method that combines spatial and temporal fingerprint information to improve video copy detection performance. Most previously developed methods are limited to pre-specified weights for combining spatial and temporal modality information. Hence, previous approaches cannot adaptively adjust the significance of the temporal fingerprints according to the difference between the temporal variances of the compared videos, leading to degraded video copy detection performance. To overcome this limitation, the proposed method extracts two types of fingerprint information: (1) a spatial fingerprint consisting of the signs of DCT coefficients in local areas of a keyframe and (2) a temporal fingerprint computed from the temporal variances in local areas of consecutive keyframes. In addition, a so-called temporal strength measurement technique is developed to quantitatively represent the amount of temporal variance; it can be used adaptively to weigh the significance of the compared temporal fingerprints. The experimental results show that the proposed modality fusion method outperforms other state-of-the-art fusion methods and popular spatio-temporal fingerprints in video copy detection. Furthermore, the proposed method reduces the time needed for video fingerprint matching by 39.0%, 25.1%, and 46.1% without a significant loss of detection accuracy on our synthetic dataset, the TRECVID 2009 CCD Task, and MUSCLE-VCD 2007, respectively. This result indicates that the proposed method can be readily incorporated into real-life video copy detection systems. (C) 2014 Elsevier B.V. All rights reserved. | - |
dc.language | English | - |
dc.publisher | ELSEVIER SCIENCE BV | - |
dc.title | Adaptive weighted fusion with new spatial and temporal fingerprints for improved video copy detection | - |
dc.type | Article | - |
dc.identifier.wosid | 000339600600006 | - |
dc.identifier.scopusid | 2-s2.0-84904046366 | - |
dc.type.rims | ART | - |
dc.citation.volume | 29 | - |
dc.citation.issue | 7 | - |
dc.citation.beginningpage | 788 | - |
dc.citation.endingpage | 806 | - |
dc.citation.publicationname | SIGNAL PROCESSING-IMAGE COMMUNICATION | - |
dc.identifier.doi | 10.1016/j.image.2014.05.002 | - |
dc.contributor.localauthor | Ro, YongMan | - |
dc.contributor.nonIdAuthor | Han, Seungwan | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Video copy detection | - |
dc.subject.keywordAuthor | Video sequence matching | - |
dc.subject.keywordAuthor | Modality fusion | - |
dc.subject.keywordAuthor | Video fingerprint | - |
dc.subject.keywordAuthor | Weighted adaptive fusion | - |
dc.subject.keywordAuthor | Spatial and temporal information | - |
dc.subject.keywordPlus | ROBUST | - |
dc.subject.keywordPlus | IDENTIFICATION | - |
dc.subject.keywordPlus | DESCRIPTORS | - |
dc.subject.keywordPlus | EFFICIENT | - |
dc.subject.keywordPlus | FEATURES | - |
dc.subject.keywordPlus | SURF | - |
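
The abstract describes two fingerprint types: a spatial fingerprint built from the signs of DCT coefficients in local areas of a keyframe, and a temporal fingerprint built from per-area variances across consecutive keyframes, summarized by a temporal strength measure. The sketch below illustrates that general idea; the block grid size, the choice of which DCT coefficient to keep, and all function names are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

def dct2(block):
    # Orthonormal 2-D DCT-II via the separable transform matrix:
    # dct2(X) = C @ X @ C.T
    n = block.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    C *= np.sqrt(2 / n)
    return C @ block @ C.T

def spatial_fingerprint(keyframe, grid=4):
    """Sign bits of one low-frequency DCT coefficient per local block
    (illustrative; the paper's block layout and coefficient set may differ)."""
    h, w = keyframe.shape
    bh, bw = h // grid, w // grid
    bits = []
    for i in range(grid):
        for j in range(grid):
            block = keyframe[i*bh:(i+1)*bh, j*bw:(j+1)*bw].astype(float)
            coef = dct2(block)
            bits.append(1 if coef[0, 1] >= 0 else 0)  # sign of one AC coefficient
    return np.array(bits, dtype=np.uint8)

def temporal_fingerprint(frames, grid=4):
    """Variance of each local block's mean intensity across consecutive keyframes."""
    h, w = frames[0].shape
    bh, bw = h // grid, w // grid
    means = np.array([[[f[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                        for j in range(grid)]
                       for i in range(grid)]
                      for f in frames])      # shape: (num_frames, grid, grid)
    return means.var(axis=0).ravel()         # temporal variance per block

def temporal_strength(tfp):
    # Scalar summary of temporal variation; a larger value would let an
    # adaptive fusion scheme give the temporal modality more weight.
    return float(tfp.mean())
```

A clip with no motion yields zero temporal strength, so an adaptive fusion rule of the kind the abstract describes would then rely mostly on the spatial fingerprint.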