Divide and Conquer : Improving Multi-Camera 3D Perception With 2D Semantic-Depth Priors and Input-Dependent Queries


Detailed description

Bibliographic details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 33(2024), dated the 18th, pages 897-909
First author: Song, Qi (author)
Other authors: Hu, Qingyong, Zhang, Chi, Chen, Yongquan, Huang, Rui
Format: online article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
LEADER 01000caa a22002652 4500
001 NLM367271982
003 DE-627
005 20240124232056.0
007 cr uuu---uuuuu
008 240119s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2024.3352808  |2 doi 
028 5 2 |a pubmed24n1269.xml 
035 |a (DE-627)NLM367271982 
035 |a (NLM)38236678 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Song, Qi  |e verfasserin  |4 aut 
245 1 0 |a Divide and Conquer  |b Improving Multi-Camera 3D Perception With 2D Semantic-Depth Priors and Input-Dependent Queries 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 24.01.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a 3D perception tasks, such as 3D object detection and Bird's-Eye-View (BEV) segmentation using multi-camera images, have drawn significant attention recently. Although accurately estimating both the semantic and 3D scene layouts is crucial for this task, existing techniques often neglect the synergistic effects of semantic and depth cues, leading to classification and position estimation errors. Additionally, the input-independent nature of initial queries also limits the learning capacity of Transformer-based models. To tackle these challenges, we propose an input-aware Transformer framework that leverages Semantics and Depth as priors (named SDTR). Our approach involves an S-D Encoder that explicitly models semantic and depth priors, thereby disentangling the learning process of object categorization and position estimation. Moreover, we introduce a Prior-guided Query Builder that incorporates the semantic prior into the initial queries of the Transformer, resulting in more effective input-aware queries. Extensive experiments on the nuScenes and Lyft benchmarks demonstrate the state-of-the-art performance of our method in both 3D object detection and BEV segmentation tasks. 
650 4 |a Journal Article 
700 1 |a Hu, Qingyong  |e verfasserin  |4 aut 
700 1 |a Zhang, Chi  |e verfasserin  |4 aut 
700 1 |a Chen, Yongquan  |e verfasserin  |4 aut 
700 1 |a Huang, Rui  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 33(2024) vom: 18., Seite 897-909  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnns 
773 1 8 |g volume:33  |g year:2024  |g day:18  |g pages:897-909 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2024.3352808  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 33  |j 2024  |b 18  |h 897-909