Rethinking the Importance of Quantization Bias, Toward Full Low-Bit Training

Quantization is a promising technique to reduce the computation and storage costs of DNNs. Low-bit (≤ 8 bits) precision training remains an open problem due to the difficulty of gradient quantization. In this paper, we find two long-standing misunderstandings about the bias of gradient quantization noise. First, the large bias of gradient quantization noise, instead of the variance, is the key factor in training accuracy loss. Second, the widely used stochastic rounding cannot solve the training crash problem caused by the gradient quantization bias in practice. Moreover, we find that the asymmetric distribution of gradients causes a large bias of gradient quantization noise. Based on our findings, we propose a novel adaptive piecewise quantization method to effectively limit the bias of gradient quantization noise. Accordingly, we propose a new data format, Piecewise Fixed Point (PWF), to represent data after quantization. We apply our method to different applications including image classification, machine translation, optical character recognition, and text classification. We achieve approximately 1.9∼3.5× speedup compared with full-precision training, with an accuracy loss of less than 0.5%. To the best of our knowledge, this is the first work to quantize gradients of all layers to 8 bits in both large-scale CNN and RNN training with negligible accuracy loss.
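The abstract makes two claims that lend themselves to a quick numerical illustration: a skewed gradient distribution gives round-to-nearest quantization a systematic bias, and a piecewise quantizer with a separate fine scale for small magnitudes can shrink that bias. The sketch below is a minimal illustration of that mechanism, not the authors' implementation: the lognormal gradient model, the 70/30 sign split, the 99th-percentile break point, and names such as quantize_uniform and quantize_two_segment are all assumptions, and the paper's actual PWF format is not specified in this record.

import numpy as np

rng = np.random.default_rng(0)

def quantize_uniform(x, bits=8, stochastic=False):
    # Symmetric uniform quantizer: one scale derived from the largest magnitude.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    y = x / scale
    if stochastic:
        # Stochastic rounding: round up with probability equal to the
        # fractional part, which makes the quantizer unbiased in expectation.
        lo = np.floor(y)
        y = lo + (rng.random(y.shape) < (y - lo))
    else:
        y = np.round(y)
    return np.clip(y, -qmax, qmax) * scale

def quantize_two_segment(x, bits=8, q=0.99):
    # Hypothetical two-segment ("piecewise") quantizer: magnitudes below a
    # break point get a fine scale, the rest a coarse one. The break point
    # (99th percentile of |x|) is an assumption, not the paper's PWF rule.
    qmax = 2 ** (bits - 1) - 1
    t = np.quantile(np.abs(x), q)
    s_fine, s_coarse = t / qmax, np.max(np.abs(x)) / qmax
    small = np.abs(x) < t
    out = np.empty_like(x)
    out[small] = np.clip(np.round(x[small] / s_fine), -qmax, qmax) * s_fine
    out[~small] = np.clip(np.round(x[~small] / s_coarse), -qmax, qmax) * s_coarse
    return out

# Synthetic "gradients": heavy-tailed magnitudes with an asymmetric sign
# (70% positive), mimicking the skewed gradient distributions the abstract
# identifies as the source of quantization bias.
n = 1_000_000
g = rng.lognormal(mean=-4.0, sigma=2.0, size=n) * np.where(rng.random(n) < 0.7, 1.0, -1.0)

for name, gq in [("nearest", quantize_uniform(g)),
                 ("stochastic", quantize_uniform(g, stochastic=True)),
                 ("two-segment", quantize_two_segment(g))]:
    err = gq - g
    print(f"{name:11s}  bias={err.mean():+.5f}  variance={err.var():.5f}")

On this synthetic input, nearest rounding flushes most small values to zero, and because more of them are positive than negative the mean error is clearly nonzero; stochastic rounding drives the mean error toward zero at the cost of higher variance; and the two-segment quantizer keeps both small. This only illustrates the static distributional effect the abstract describes; the training-crash behavior it refers to concerns the dynamics of full training runs.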

Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 31 (2022), dated: 02., pages 7006-7019
First author: Liu, Chang (Author)
Other authors: Zhang, Xishan, Zhang, Rui, Li, Ling, Zhou, Shiyi, Huang, Di, Li, Zhen, Du, Zidong, Liu, Shaoli, Chen, Tianshi
Format: Online article
Language: English
Published: 2022
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000caa a22002652c 4500
001 NLM348364687
003 DE-627
005 20250304012847.0
007 cr uuu---uuuuu
008 231226s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2022.3216776  |2 doi 
028 5 2 |a pubmed25n1160.xml 
035 |a (DE-627)NLM348364687 
035 |a (NLM)36322492 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Liu, Chang  |e verfasserin  |4 aut 
245 1 0 |a Rethinking the Importance of Quantization Bias, Toward Full Low-Bit Training 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 15.11.2022 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Quantization is a promising technique to reduce the computation and storage costs of DNNs. Low-bit (≤ 8 bits) precision training remains an open problem due to the difficulty of gradient quantization. In this paper, we find two long-standing misunderstandings about the bias of gradient quantization noise. First, the large bias of gradient quantization noise, instead of the variance, is the key factor in training accuracy loss. Second, the widely used stochastic rounding cannot solve the training crash problem caused by the gradient quantization bias in practice. Moreover, we find that the asymmetric distribution of gradients causes a large bias of gradient quantization noise. Based on our findings, we propose a novel adaptive piecewise quantization method to effectively limit the bias of gradient quantization noise. Accordingly, we propose a new data format, Piecewise Fixed Point (PWF), to represent data after quantization. We apply our method to different applications including image classification, machine translation, optical character recognition, and text classification. We achieve approximately 1.9∼3.5× speedup compared with full-precision training, with an accuracy loss of less than 0.5%. To the best of our knowledge, this is the first work to quantize gradients of all layers to 8 bits in both large-scale CNN and RNN training with negligible accuracy loss.
650 4 |a Journal Article 
700 1 |a Zhang, Xishan  |e verfasserin  |4 aut 
700 1 |a Zhang, Rui  |e verfasserin  |4 aut 
700 1 |a Li, Ling  |e verfasserin  |4 aut 
700 1 |a Zhou, Shiyi  |e verfasserin  |4 aut 
700 1 |a Huang, Di  |e verfasserin  |4 aut 
700 1 |a Li, Zhen  |e verfasserin  |4 aut 
700 1 |a Du, Zidong  |e verfasserin  |4 aut 
700 1 |a Liu, Shaoli  |e verfasserin  |4 aut 
700 1 |a Chen, Tianshi  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g 31(2022) vom: 02., Seite 7006-7019  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnas 
773 1 8 |g volume:31  |g year:2022  |g day:02  |g pages:7006-7019 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2022.3216776  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 31  |j 2022  |b 02  |h 7006-7019