DVPE: Divided View Position Embedding for Multi-View 3D Object Detection
Jiasen Wang, Zhenglin Li, Ke Sun, Xianyuan Liu, Yang Zhou
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 6877-6885.
https://doi.org/10.24963/ijcai.2024/760
Sparse query-based paradigms have achieved significant success in multi-view 3D detection for autonomous vehicles. Current research faces a trade-off when aggregating multi-view features: enlarging receptive fields increases interference from irrelevant features. Moreover, the differing poses of cameras make global attention models difficult to train. To address these problems, this paper proposes a divided-view method in which features are modeled globally via a visibility cross-attention mechanism but interact only with partial features in a divided local virtual space. This effectively reduces interference from irrelevant features and, by decoupling the position embedding from camera poses, alleviates the training difficulties of the transformer. Additionally, 2D historical RoI features are incorporated into object-centric temporal modeling to exploit high-level visual semantic information. The model is trained with a one-to-many assignment strategy to stabilize training. Our framework, named DVPE, achieves state-of-the-art performance (57.2% mAP and 64.5% NDS) on the nuScenes test set. Code will be available at https://github.com/dop0/DVPE.
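To make the divided-view idea from the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation (see the linked repository for that). The class name DividedViewAttention, the equal angular partition of the surround view into sectors, the MLP position embedding, and all tensor shapes are illustrative assumptions.

```python
# Minimal sketch of divided-view attention: queries attend only to features
# inside the virtual view (angular sector) their 3D reference point falls
# into, and positions are embedded in each sector's local frame so the
# embedding is decoupled from camera poses. All names and design details
# here are hypothetical, not the authors' code.
import math

import torch
import torch.nn as nn


class DividedViewAttention(nn.Module):
    def __init__(self, embed_dim=256, num_heads=8, num_views=6):
        super().__init__()
        self.num_views = num_views
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Shared embedding of local (sector-frame) 3D positions.
        self.pos_mlp = nn.Sequential(
            nn.Linear(3, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )

    def view_index(self, xyz):
        """Assign each 3D point to one of `num_views` equal BEV sectors."""
        azimuth = torch.atan2(xyz[..., 1], xyz[..., 0]) % (2 * math.pi)
        sector = 2 * math.pi / self.num_views
        return (azimuth / sector).long().clamp_(max=self.num_views - 1)

    def to_local(self, xyz, view_idx):
        """Rotate points into their sector's frame so all sectors share a
        pose-agnostic coordinate system."""
        center = (view_idx.float() + 0.5) * (2 * math.pi / self.num_views)
        cos, sin = torch.cos(-center), torch.sin(-center)
        x, y, z = xyz.unbind(-1)
        return torch.stack([cos * x - sin * y, sin * x + cos * y, z], dim=-1)

    def forward(self, queries, q_xyz, feats, f_xyz):
        # queries: (B, Nq, C) with reference points q_xyz: (B, Nq, 3)
        # feats:   (B, Nf, C) with 3D positions      f_xyz: (B, Nf, 3)
        q_view, f_view = self.view_index(q_xyz), self.view_index(f_xyz)
        q = queries + self.pos_mlp(self.to_local(q_xyz, q_view))
        k = feats + self.pos_mlp(self.to_local(f_xyz, f_view))
        # Visibility mask: True blocks attention across sector boundaries,
        # so each query interacts only with its own view's features.
        # (Assumes every sector contains at least one feature.)
        mask = q_view.unsqueeze(-1) != f_view.unsqueeze(1)         # (B, Nq, Nf)
        mask = mask.repeat_interleave(self.attn.num_heads, dim=0)  # (B*H, Nq, Nf)
        out, _ = self.attn(q, k, feats, attn_mask=mask)
        return out
```

Note the design intent this toy version tries to capture: because every query and feature is re-expressed in its sector's local frame before the position embedding is applied, the attention weights no longer depend on where the physical cameras sit, which is the decoupling the abstract credits with easing transformer training.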
Keywords:
Robotics: ROB: Robotics and vision
Computer Vision: CV: 3D computer vision
Computer Vision: CV: Recognition (object detection, categorization)
Robotics: ROB: Perception