Learning the Relative Importance of Objects from Tagged Images for Retrieval and Cross-Modal Search

Cited 85 times in Web of Science · Cited 0 times in Scopus

Abstract
We introduce an approach to image retrieval and auto-tagging that leverages the implicit information about object importance conveyed by the list of keyword tags a person supplies for an image. We propose an unsupervised learning procedure based on Kernel Canonical Correlation Analysis that discovers the relationship between how humans tag images (e.g., the order in which words are mentioned) and the relative importance of objects and their layout in the scene. Using this discovered connection, we show how to boost accuracy for novel queries, such that the search results better preserve the aspects a human may find most worth mentioning. We evaluate our approach on three datasets using either keyword tags or natural language descriptions, and quantify results both with ground-truth parameters and with direct tests on human subjects. Our results show clear improvements over approaches that either rely on image features alone, or that use words and image features but ignore the implied importance cues. Overall, our work provides a novel way to incorporate high-level human perception of scenes into visual representations for enhanced image search.
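The abstract names Kernel Canonical Correlation Analysis (KCCA) as the core learning tool: it finds a common space in which a kernel over image features and a kernel over tag-list features (the latter sensitive to cues such as word order) are maximally correlated. The following is a minimal sketch of regularized KCCA only, not the authors' implementation; the kernels, the regularization scheme, and all function and variable names below are illustrative assumptions.

```python
import numpy as np

def center_kernel(K):
    """Double-center a kernel matrix so implicit features have zero mean."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca(Kx, Ky, reg=0.1, n_components=2):
    """Regularized kernel CCA via a generalized eigenproblem.

    Kx : (n, n) centered kernel over image features.
    Ky : (n, n) centered kernel over tag-list features
         (e.g., one sensitive to the rank at which each tag is mentioned).
    Returns dual weights (alpha, beta) and canonical correlations.
    """
    n = Kx.shape[0]
    I = np.eye(n)
    # Regularization keeps the inverses well-conditioned when kernels are rank-deficient.
    Rx = np.linalg.solve(Kx + reg * I, I)   # (Kx + reg*I)^-1
    Ry = np.linalg.solve(Ky + reg * I, I)   # (Ky + reg*I)^-1
    # Squared canonical correlations are eigenvalues of this product.
    M = Rx @ Ky @ Ry @ Kx
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:n_components]
    corr = np.sqrt(np.clip(vals.real[order], 0.0, 1.0))
    alpha = vecs[:, order].real             # dual weights for the image view
    beta = Ry @ Kx @ alpha                  # matching weights for the tag view
    return alpha, beta, corr

# Toy usage with synthetic stand-in features for the two views.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))                     # stand-in image features
Y = X[:, :5] + 0.1 * rng.normal(size=(50, 5))     # stand-in tag features
Kx = center_kernel(X @ X.T)
Ky = center_kernel(Y @ Y.T)
alpha, beta, corr = kcca(Kx, Ky)
print("leading canonical correlations:", np.round(corr, 3))
```

In a retrieval setting like the paper's, a novel image would be projected into the learned space via its kernel values against the training set (roughly, K_query @ alpha) and ranked there, so that matches reflect the importance cues implied by how people ordered their tags.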
Publisher
SPRINGER
Issue Date
2012-11
Language
English
Article Type
Article
Keywords
SCALE
Citation
INTERNATIONAL JOURNAL OF COMPUTER VISION, v.100, no.2, pp. 134-153
ISSN
0920-5691
DOI
10.1007/s11263-011-0494-3
URI
http://hdl.handle.net/10203/238877
Appears in Collection
AI-Journal Papers (journal papers)
Files in This Item
There are no files associated with this item.
