Graph Neural Network and Spatiotemporal Transformer Attention for 3D Video Object Detection From Point Clouds

Previous works for LiDAR-based 3D object detection mainly focus on the single-frame paradigm. In this paper, we propose to detect 3D objects by exploiting temporal information in multiple frames, i.e., point cloud videos. We empirically categorize the temporal information into short-term and long-term patterns. To encode the short-term data, we present a Grid Message Passing Network (GMPNet), which considers each grid (i.e., the grouped points) as a node and constructs a k-NN graph with the neighbor grids. To update features for a grid, GMPNet iteratively collects information from its neighbors, thus mining the motion cues in grids from nearby frames. To further aggregate long-term frames, we propose an Attentive Spatiotemporal Transformer GRU (AST-GRU), which contains a Spatial Transformer Attention (STA) module and a Temporal Transformer Attention (TTA) module. STA and TTA enhance the vanilla GRU to focus on small objects and better align moving objects. Our overall framework supports both online and offline video object detection in point clouds. We implement our algorithm based on prevalent anchor-based and anchor-free detectors. Evaluation results on the challenging nuScenes benchmark show superior performance of our method, achieving first on the leaderboard (at the time of paper submission) without any "bells and whistles." Our source code is available at https://github.com/shenjianbing/GMP3D
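The abstract's GMPNet idea (each grid is a node, a k-NN graph connects neighboring grids, and features are updated by iteratively collecting information from neighbors) can be sketched in plain Python. This is an illustrative assumption, not the paper's exact formulation: the function names and the mean-aggregation residual update rule are invented here for clarity.

```python
# Minimal sketch of k-NN graph message passing over grid features.
# Assumptions (not from the paper): mean aggregation of neighbor
# features, combined with the node's own state via a residual add.
import math


def knn(coords, k):
    """For each grid center, return the indices of its k nearest neighbors."""
    n = len(coords)
    neighbors = []
    for i in range(n):
        order = sorted(range(n), key=lambda j: math.dist(coords[i], coords[j]))
        neighbors.append([j for j in order if j != i][:k])
    return neighbors


def message_passing_step(features, neighbors):
    """One iteration: each node averages its neighbors' feature vectors
    (the 'message') and adds the result to its own features."""
    updated = []
    for i, feat in enumerate(features):
        msg = [
            sum(features[j][c] for j in neighbors[i]) / len(neighbors[i])
            for c in range(len(feat))
        ]
        updated.append([a + b for a, b in zip(feat, msg)])
    return updated


# Four grid centers from two hypothetical nearby frames; one scalar
# feature channel per grid. Running several steps spreads motion cues
# between spatially close grids.
coords = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
feats = [[1.0], [1.0], [1.0], [1.0]]
graph = knn(coords, k=2)
for _ in range(2):
    feats = message_passing_step(feats, graph)
```

In the paper, this aggregation runs over grids pooled from nearby frames, so repeated iterations let motion cues propagate through the local graph; the sketch above only shows the graph-update mechanics.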


Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 45(2023), no. 8, 09 Aug., pages 9822-9835
Main Author: Yin, Junbo (Author)
Other Authors: Shen, Jianbing, Gao, Xin, Crandall, David J, Yang, Ruigang
Format: Online Article
Language: English
Published: 2023
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM332903680
003 DE-627
005 20231225220532.0
007 cr uuu---uuuuu
008 231225s2023 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2021.3125981  |2 doi 
028 5 2 |a pubmed24n1109.xml 
035 |a (DE-627)NLM332903680 
035 |a (NLM)34752380 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Yin, Junbo  |e verfasserin  |4 aut 
245 1 0 |a Graph Neural Network and Spatiotemporal Transformer Attention for 3D Video Object Detection From Point Clouds 
264 1 |c 2023 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 03.07.2023 
500 |a Date Revised 03.07.2023 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Previous works for LiDAR-based 3D object detection mainly focus on the single-frame paradigm. In this paper, we propose to detect 3D objects by exploiting temporal information in multiple frames, i.e., point cloud videos. We empirically categorize the temporal information into short-term and long-term patterns. To encode the short-term data, we present a Grid Message Passing Network (GMPNet), which considers each grid (i.e., the grouped points) as a node and constructs a k-NN graph with the neighbor grids. To update features for a grid, GMPNet iteratively collects information from its neighbors, thus mining the motion cues in grids from nearby frames. To further aggregate long-term frames, we propose an Attentive Spatiotemporal Transformer GRU (AST-GRU), which contains a Spatial Transformer Attention (STA) module and a Temporal Transformer Attention (TTA) module. STA and TTA enhance the vanilla GRU to focus on small objects and better align moving objects. Our overall framework supports both online and offline video object detection in point clouds. We implement our algorithm based on prevalent anchor-based and anchor-free detectors. Evaluation results on the challenging nuScenes benchmark show superior performance of our method, achieving first on the leaderboard (at the time of paper submission) without any "bells and whistles." Our source code is available at https://github.com/shenjianbing/GMP3D 
650 4 |a Journal Article 
700 1 |a Shen, Jianbing  |e verfasserin  |4 aut 
700 1 |a Gao, Xin  |e verfasserin  |4 aut 
700 1 |a Crandall, David J  |e verfasserin  |4 aut 
700 1 |a Yang, Ruigang  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 45(2023), 8 vom: 09. Aug., Seite 9822-9835  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:45  |g year:2023  |g number:8  |g day:09  |g month:08  |g pages:9822-9835 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2021.3125981  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 45  |j 2023  |e 8  |b 09  |c 08  |h 9822-9835