Familiarity based unified visual attention model for fast and robust object recognition

Cited 30 times in Web of Science; cited 0 times in Scopus
Although visual attention models using bottom-up saliency can speed up object recognition by predicting object locations, saliency alone cannot discern target objects from clutter when a scene contains multiple salient objects. Using a metric named familiarity, we propose a top-down method for guiding attention towards target objects, in addition to bottom-up saliency. To demonstrate the effectiveness of familiarity, the unified visual attention model (UVAM), which combines top-down familiarity and bottom-up saliency, is applied to SIFT-based object recognition. The UVAM is tested on 3600 artificially generated images containing COIL-100 objects with varying amounts of clutter, and on 126 images of real scenes. The recognition times are reduced by 2.7x and 2x, respectively, with no reduction in recognition accuracy, demonstrating the effectiveness and robustness of the familiarity-based UVAM. (C) 2009 Elsevier Ltd. All rights reserved.
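To make the idea concrete, the sketch below is a rough illustration, not the authors' UVAM implementation: it fuses a normalized bottom-up saliency map with a top-down familiarity map by a simple weighted combination, then runs SIFT matching only on the most attended image patch. The weights, patch size, and helper names (attention_map, recognize_most_attended) are assumptions made for illustration; SIFT and the brute-force matcher come from OpenCV.

# Minimal sketch (assumed formulation, not the paper's exact UVAM):
# fuse bottom-up saliency with top-down familiarity, then run SIFT
# matching only on the most attended patch.
import numpy as np
import cv2  # requires opencv-python >= 4.4 for cv2.SIFT_create()


def attention_map(saliency, familiarity, w_bu=0.5, w_td=0.5):
    """Combine normalized bottom-up and top-down maps (weights are assumed)."""
    s = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-9)
    f = (familiarity - familiarity.min()) / (np.ptp(familiarity) + 1e-9)
    return w_bu * s + w_td * f


def recognize_most_attended(image, attn, templates, patch=96):
    """Match SIFT descriptors of the most attended patch against object templates.

    `image` and the template images are assumed to be 8-bit grayscale arrays;
    `templates` maps object names to template images (hypothetical inputs).
    """
    y, x = np.unravel_index(np.argmax(attn), attn.shape)
    y0, x0 = max(0, y - patch // 2), max(0, x - patch // 2)
    roi = image[y0:y0 + patch, x0:x0 + patch]

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    _, des_roi = sift.detectAndCompute(roi, None)

    best_name, best_score = None, 0
    for name, tmpl in templates.items():
        _, des_tmpl = sift.detectAndCompute(tmpl, None)
        if des_roi is None or des_tmpl is None:
            continue
        # Lowe's ratio test: count reliable correspondences per template.
        pairs = matcher.knnMatch(des_roi, des_tmpl, k=2)
        good = sum(1 for p in pairs
                   if len(p) == 2 and p[0].distance < 0.75 * p[1].distance)
        if good > best_score:
            best_name, best_score = name, good
    return best_name, best_score

In this sketch the speed-up comes from restricting SIFT extraction and matching to attended patches; how the familiarity map itself is computed is left abstract here, as the paper defines it in detail.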
Publisher
ELSEVIER SCI LTD
Issue Date
2010-03
Language
English
Article Type
Article
Citation
PATTERN RECOGNITION, v.43, no.3, pp.1116 - 1128
ISSN
0031-3203
DOI
10.1016/j.patcog.2009.07.014
URI
http://hdl.handle.net/10203/95493
Appears in Collection
EE-Journal Papers (Journal Papers)