A New Approach for Overlay Text Detection and Extraction From Complex Video Scene

Cited 64 times in Web of Science; cited 96 times in Scopus
DC Field: Value (Language)
dc.contributor.author: Kim, Won-Jun (ko)
dc.contributor.author: Kim, Chang-Ick (ko)
dc.date.accessioned: 2009-11-09T06:16:41Z
dc.date.available: 2009-11-09T06:16:41Z
dc.date.created: 2012-02-06
dc.date.issued: 2009-02
dc.identifier.citation: IEEE TRANSACTIONS ON IMAGE PROCESSING, v.18, no.2, pp.401 - 411
dc.identifier.issn: 1057-7149
dc.identifier.uri: http://hdl.handle.net/10203/12264
dc.description.abstract: Overlay text provides important semantic clues for video content analysis tasks such as video information retrieval and summarization, since the content of the scene or the editor's intention can be well represented by the inserted text. Most previous approaches to extracting overlay text from videos are based on low-level features such as edge, color, and texture information. However, existing methods have difficulty handling text with varying contrast or text inserted into a complex background. In this paper, we propose a novel framework to detect and extract overlay text from the video scene. Based on our observation that transient colors exist between inserted text and its adjacent background, a transition map is first generated. Candidate regions are then extracted by a reshaping method, and the overlay text regions are determined based on the occurrence of overlay text in each candidate. The detected overlay text regions are localized accurately using the projection of overlay text pixels in the transition map, and the text extraction is finally conducted. The proposed method is robust to different character sizes, positions, contrasts, and colors, and it is language independent. Overlay text region update between frames is also employed to reduce the processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
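The transition-map idea described in the abstract can be illustrated with a minimal sketch. The code below is a deliberate simplification, not the paper's exact formulation: it marks sharp horizontal intensity changes as transition pixels and then localizes a text box from the row/column projections of those pixels. The `thresh` and `min_density` parameters are assumed for illustration and do not come from the paper.

```python
import numpy as np

def transition_map(gray, thresh=40):
    """Mark pixels where horizontal intensity changes sharply.

    Illustrative stand-in for the paper's transition map, which models
    the transient colors between overlay text and its background.
    `thresh` is an assumed tuning parameter.
    """
    # Absolute horizontal differences between neighboring pixels.
    diff = np.abs(np.diff(gray.astype(np.int32), axis=1))
    tmap = np.zeros(gray.shape, dtype=bool)
    tmap[:, 1:] = diff >= thresh
    return tmap

def localize_text(tmap, min_density=0.1):
    """Localize a text box via row/column projections of transition pixels.

    A row (column) is kept when its count of transition pixels exceeds
    `min_density` times the image width (height). Returns
    (top, bottom, left, right) or None when no region qualifies.
    """
    rows = tmap.sum(axis=1)
    cols = tmap.sum(axis=0)
    active_rows = np.where(rows >= min_density * tmap.shape[1])[0]
    active_cols = np.where(cols >= min_density * tmap.shape[0])[0]
    if active_rows.size == 0 or active_cols.size == 0:
        return None
    return (active_rows[0], active_rows[-1], active_cols[0], active_cols[-1])
```

On a synthetic frame with a high-contrast striped band (a crude proxy for character strokes), the projections recover the band's bounding box; real overlay text would additionally require the paper's reshaping and temporal-update steps.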
dc.description.sponsorship: This research was supported by the Ministry of Knowledge Economy, Korea, under the Information Technology Research Center support program supervised by the Institute of Information Technology Advancement (grant number IITA-2008-C1090-0801-0017). (en)
dc.language: English
dc.language.iso: en_US (en)
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: A New Approach for Overlay Text Detection and Extraction From Complex Video Scene
dc.type: Article
dc.identifier.wosid: 000262562600015
dc.identifier.scopusid: 2-s2.0-59649097423
dc.type.rims: ART
dc.citation.volume: 18
dc.citation.issue: 2
dc.citation.beginningpage: 401
dc.citation.endingpage: 411
dc.citation.publicationname: IEEE TRANSACTIONS ON IMAGE PROCESSING
dc.identifier.doi: 10.1109/TIP.2008.2008225
dc.contributor.localauthor: Kim, Chang-Ick
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Optical character recognition (OCR)
dc.subject.keywordAuthor: overlay text
dc.subject.keywordAuthor: transition map
dc.subject.keywordAuthor: video information retrieval
dc.subject.keywordAuthor: video summarization
