Latency-aware Unified Dynamic Networks for Efficient Image Recognition

Dynamic computation has emerged as a promising strategy to improve the inference efficiency of deep networks. It allows selective activation of various computing units, such as layers or convolution channels, or adaptive allocation of computation to highly informative spatial regions in image features, thus significantly reducing unnecessary computations conditioned on each input sample. However, the practical efficiency of dynamic models does not always correspond to theoretical outcomes. This discrepancy stems from three key challenges: 1) The absence of a unified formulation for various dynamic inference paradigms, owing to the fragmented research landscape; 2) The undue emphasis on algorithm design while neglecting scheduling strategies, which are critical for optimizing computational performance and resource utilization in CUDA-enabled GPU settings; and 3) The cumbersome process of evaluating practical latency, as most existing libraries are tailored for static operators. To address these issues, we introduce Latency-Aware Unified Dynamic Networks (LAUDNet), a comprehensive framework that amalgamates three cornerstone dynamic paradigms (spatially-adaptive computation, dynamic layer skipping, and dynamic channel skipping) under a unified formulation. To reconcile theoretical and practical efficiency, LAUDNet integrates algorithmic design with scheduling optimization, assisted by a latency predictor that accurately and efficiently gauges the inference latency of dynamic operators. This latency predictor harmonizes considerations of algorithms, scheduling strategies, and hardware attributes. We empirically validate various dynamic paradigms within the LAUDNet framework across a range of vision tasks, including image classification, object detection, and instance segmentation. Our experiments confirm that LAUDNet effectively narrows the gap between theoretical and real-world efficiency. For example, LAUDNet can reduce the practical latency of its static counterpart, ResNet-101, by over 50% on hardware platforms such as V100, RTX3090, and TX2 GPUs. Furthermore, LAUDNet surpasses competing methods in the trade-off between accuracy and efficiency. Code is available at: https://www.github.com/LeapLabTHU/LAUDNet
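To make the layer-skipping paradigm described in the abstract concrete, the following minimal PyTorch sketch shows a residual block whose main branch is gated per input sample. It is an illustration only, not the LAUDNet implementation; the class name GatedResidualBlock and all hyperparameters are assumptions made for this example, and a real dynamic operator would also need the scheduling and latency-aware dispatch discussed in the paper to turn skipped computation into actual speedups.

import torch
import torch.nn as nn


class GatedResidualBlock(nn.Module):
    """Sketch of per-sample dynamic layer skipping (illustrative, not LAUDNet code)."""

    def __init__(self, channels: int):
        super().__init__()
        # Main branch: two 3x3 convolutions, as in a standard residual block.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Lightweight gate: global average pooling plus a linear layer produces
        # one "execute or skip" logit per sample.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hard per-sample decision at inference time. Training would normally
        # use a differentiable relaxation (e.g. Gumbel-Softmax), omitted here.
        keep = (self.gate(x) > 0).float().view(-1, 1, 1, 1)
        # Note: this naive masking still runs the main branch for every sample,
        # so it saves FLOPs only on paper. Realizing the saving as lower latency
        # requires gathering the kept samples and scheduling the operator on the
        # reduced batch, which is exactly the theory-vs-practice gap the paper
        # targets.
        return x + keep * self.body(x)


if __name__ == "__main__":
    block = GatedResidualBlock(64).eval()
    with torch.no_grad():
        y = block(torch.randn(4, 64, 32, 32))
    print(y.shape)  # torch.Size([4, 64, 32, 32])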

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - PP(2024), 25 Apr.
Main Author: Han, Yizeng (Author)
Other Authors: Liu, Zeyu, Yuan, Zhihang, Pu, Yifan, Wang, Chaofei, Song, Shiji, Huang, Gao
Format: Online Article
Language: English
Published: 2024
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM371514231
003 DE-627
005 20240426235316.0
007 cr uuu---uuuuu
008 240426s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3393530  |2 doi 
028 5 2 |a pubmed24n1388.xml 
035 |a (DE-627)NLM371514231 
035 |a (NLM)38662565 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Han, Yizeng  |e verfasserin  |4 aut 
245 1 0 |a Latency-aware Unified Dynamic Networks for Efficient Image Recognition 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 25.04.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a Dynamic computation has emerged as a promising strategy to improve the inference efficiency of deep networks. It allows selective activation of various computing units, such as layers or convolution channels, or adaptive allocation of computation to highly informative spatial regions in image features, thus significantly reducing unnecessary computations conditioned on each input sample. However, the practical efficiency of dynamic models does not always correspond to theoretical outcomes. This discrepancy stems from three key challenges: 1) The absence of a unified formulation for various dynamic inference paradigms, owing to the fragmented research landscape; 2) The undue emphasis on algorithm design while neglecting scheduling strategies, which are critical for optimizing computational performance and resource utilization in CUDA-enabled GPU settings; and 3) The cumbersome process of evaluating practical latency, as most existing libraries are tailored for static operators. To address these issues, we introduce Latency-Aware Unified Dynamic Networks (LAUDNet), a comprehensive framework that amalgamates three cornerstone dynamic paradigms-spatially-adaptive computation, dynamic layer skipping, and dynamic channel skipping-under a unified formulation. To reconcile theoretical and practical efficiency, LAUDNet integrates algorithmic design with scheduling optimization, assisted by a latency predictor that accurately and efficiently gauges the inference latency of dynamic operators. This latency predictor harmonizes considerations of algorithms, scheduling strategies, and hardware attributes. We empirically validate various dynamic paradigms within the LAUDNet framework across a range of vision tasks, including image classification, object detection, and instance segmentation. Our experiments confirm that LAUDNet effectively narrows the gap between theoretical and real-world efficiency. For example, LAUDNet can reduce the practical latency of its static counterpart, ResNet-101, by over 50% on hardware platforms such as V100, RTX3090, and TX2 GPUs. Furthermore, LAUDNet surpasses competing methods in the trade-off between accuracy and efficiency. Code is available at: https://www.github.com/LeapLabTHU/LAUDNet 
650 4 |a Journal Article 
700 1 |a Liu, Zeyu  |e verfasserin  |4 aut 
700 1 |a Yuan, Zhihang  |e verfasserin  |4 aut 
700 1 |a Pu, Yifan  |e verfasserin  |4 aut 
700 1 |a Wang, Chaofei  |e verfasserin  |4 aut 
700 1 |a Song, Shiji  |e verfasserin  |4 aut 
700 1 |a Huang, Gao  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g PP(2024) vom: 25. Apr.  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:PP  |g year:2024  |g day:25  |g month:04 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3393530  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2024  |b 25  |c 04