An automated end-to-end pipeline for fine-grained video annotation using deep neural networks

Cited 2 times in Web of Science; cited 0 times in Scopus
The searchability of video content is often limited to the descriptions authors and/or annotators care to provide. The level of description can range from absolutely nothing to fine-grained annotations at the level of frames. Based on these annotations, certain parts of the video content are more searchable than others. Within the context of the STEAMER project, we developed an innovative end-to-end system that attempts to tackle the problem of unsupervised retrieval of news video content, leveraging multiple information streams and deep neural networks. In particular, we extracted keyphrases and named entities from transcripts, subsequently refining these keyphrases and named entities based on their visual appearance in the news video content. Moreover, to allow for fine-grained frame-level annotations, we temporally located high-confidence keyphrases in the news video content. To that end, we had to tackle challenges such as the automatic construction of training sets and the automatic assessment of keyphrase imageability. In this paper, we discuss the main components of our end-to-end system, capable of transforming textual and visual information into fine-grained video annotations.
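As a rough illustration of the transcript-to-annotation idea, the sketch below extracts candidate keyphrases from a timestamped transcript and maps each surviving keyphrase back to the segments that mention it. This is not the authors' implementation (which relies on deep neural networks, visual refinement, and imageability assessment); the frequency-based scoring, stopword list, and segment data are simplified stand-ins.

```python
import re
from collections import Counter

# Minimal stopword list; the real pipeline would use proper NLP tooling.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is", "was", "about"}

def extract_keyphrases(segments, top_k=3):
    """Rank non-stopword unigrams by frequency across all transcript segments."""
    counts = Counter(
        tok
        for _, text in segments
        for tok in re.findall(r"[a-z]+", text.lower())
        if tok not in STOPWORDS
    )
    return [phrase for phrase, _ in counts.most_common(top_k)]

def locate(segments, keyphrases):
    """Map each keyphrase to the start times of segments mentioning it,
    yielding coarse temporal annotations."""
    return {
        kp: [t for t, text in segments if kp in text.lower()]
        for kp in keyphrases
    }

# Hypothetical ASR transcript: (start_time_seconds, text) pairs.
segments = [
    (12.0, "The president spoke about the flood damage"),
    (34.5, "Rescue teams reached the flood zone"),
    (51.2, "The president promised federal aid"),
]

keyphrases = extract_keyphrases(segments)
print(locate(segments, keyphrases))
```

In the paper's system, the equivalent of `extract_keyphrases` is followed by visual refinement (keeping only keyphrases that actually appear in the video) before temporal localization.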
Publisher
ACM (Association for Computing Machinery)
Issue Date
2016-06
Language
English
Citation

6th ACM International Conference on Multimedia Retrieval, ICMR 2016, pp.409 - 412

DOI
10.1145/2911996.2912028
URI
http://hdl.handle.net/10203/313033
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.