Exploiting Scene Depth for Object Detection with Multimodal Transformers

We propose MEDUSA (Multimodal Estimated-Depth Unification with Self-Attention), a generic framework that fuses RGB and depth information using multimodal transformers in the context of object detection. Unlike previous methods that rely on depth measured by physical sensors such as Kinect and LiDAR, we show that depth maps inferred by a monocular depth estimator can play an important role in enhancing the performance of modern object detectors. To make use of the estimated depth, MEDUSA comprises a robust feature extraction phase followed by multimodal transformers for RGB-D fusion. The main strength of MEDUSA lies in its broad applicability to any existing large-scale RGB dataset, including PASCAL VOC and Microsoft COCO. Extensive experiments on three datasets show that MEDUSA achieves higher precision than several strong baselines.
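The abstract outlines a two-stage design: extract features separately from the RGB image and from an estimated depth map, then fuse the two streams with transformer self-attention before detection. The following is a minimal PyTorch sketch of that flow; the module names, token layout, feature dimension, and two-layer fusion encoder are illustrative assumptions, not the paper's actual architecture, and in practice the depth map would come from an off-the-shelf monocular estimator rather than random data.

```python
import torch
import torch.nn as nn

class MedusaSketch(nn.Module):
    """Illustrative RGB-D fusion sketch, not the authors' implementation."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        # Stand-ins for the modality-specific feature extractors:
        # patchify each input into a grid of dim-dimensional tokens.
        self.rgb_backbone = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.depth_backbone = nn.Conv2d(1, dim, kernel_size=16, stride=16)
        # Self-attention over the concatenated RGB and depth tokens
        # performs the multimodal (RGB-D) fusion.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, rgb, depth):
        # rgb: (B, 3, H, W); depth: (B, 1, H, W) from a monocular estimator.
        rgb_tokens = self.rgb_backbone(rgb).flatten(2).transpose(1, 2)
        depth_tokens = self.depth_backbone(depth).flatten(2).transpose(1, 2)
        tokens = torch.cat([rgb_tokens, depth_tokens], dim=1)
        # Fused tokens would feed a detection head (classes + boxes).
        return self.fusion(tokens)

# Usage: a random map stands in for the estimated depth.
model = MedusaSketch()
rgb = torch.randn(2, 3, 224, 224)
depth = torch.randn(2, 1, 224, 224)
print(model(rgb, depth).shape)  # torch.Size([2, 392, 256])
```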
Publisher
British Machine Vision Association (BMVA)
Issue Date
2021-11-24
Language
English
Citation
32nd British Machine Vision Conference, pp. 1-14
URI
http://hdl.handle.net/10203/289599
Appears in Collection
CS-Conference Papers (Conference Papers)