Rethinking the Importance of Quantization Bias, Toward Full Low-Bit Training
Quantization is a promising technique to reduce the computation and storage costs of DNNs. Low-bit (≤ 8 bits) precision training remains an open problem due to the difficulty of gradient quantization. In this paper, we find two long-standing misunderstandings of the bias of gradient quantization no...
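For context on what the abstract means by low-bit gradient quantization, the following is a minimal, generic sketch of quantizing a gradient tensor to ≤ 8-bit integers with stochastic rounding. It is not the method of the cited paper; the function name, the per-tensor symmetric scaling, and the bit widths are illustrative assumptions.

```python
import numpy as np

def quantize_gradient(g, num_bits=8, stochastic=True, rng=None):
    """Generic symmetric per-tensor quantization of a gradient to `num_bits` integers.

    Stochastic rounding keeps the quantizer unbiased in expectation, which is
    the usual motivation for applying it to gradients (illustrative sketch only).
    """
    rng = rng or np.random.default_rng()
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8 bits
    scale = np.max(np.abs(g)) / qmax + 1e-12    # per-tensor scale
    scaled = g / scale
    if stochastic:
        floor = np.floor(scaled)
        q = floor + (rng.random(g.shape) < (scaled - floor))  # round up with prob = fractional part
    else:
        q = np.round(scaled)                     # deterministic nearest rounding
    q = np.clip(q, -qmax, qmax)
    return q * scale                             # dequantized gradient handed to the optimizer

# Usage: the stochastic-rounding error is zero-mean on average.
g = np.random.randn(1024).astype(np.float32) * 1e-3
g_q = quantize_gradient(g, num_bits=4)
print("mean quantization error:", float(np.mean(g_q - g)))
```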
Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society (1992-), vol. 31 (2022), pp. 7006-7019
Main author: Liu, Chang
Other authors: Zhang, Xishan; Zhang, Rui; Li, Ling; Zhou, Shiyi; Huang, Di; Li, Zhen; Du, Zidong; Liu, Shaoli; Chen, Tianshi
Format: Online article
Language: English
Published: 2022
Parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article