Towards Accurate Post-Training Quantization of Vision Transformers via Error Reduction


Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 47(2025), no. 4, 15 Apr., pages 2676-2692
Main author: Zhong, Yunshan (Author)
Other authors: Huang, You; Hu, Jiawei; Zhang, Yuxin; Ji, Rongrong
Format: Online article
Language: English
Published: 2025
Collection: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652c 4500
001 NLM385073550
003 DE-627
005 20250508065330.0
007 cr uuu---uuuuu
008 250508s2025 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2025.3528042  |2 doi 
028 5 2 |a pubmed25n1337.xml 
035 |a (DE-627)NLM385073550 
035 |a (NLM)40031001 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Zhong, Yunshan  |e verfasserin  |4 aut 
245 1 0 |a Towards Accurate Post-Training Quantization of Vision Transformers via Error Reduction 
264 1 |c 2025 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 07.03.2025 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Post-training quantization (PTQ) for vision transformers (ViTs) has received increasing attention from both academic and industrial communities due to its minimal data needs and high time efficiency. However, many current methods fail to account for the complex interactions between quantized weights and activations, resulting in significant quantization errors and suboptimal performance. This paper presents ERQ, an innovative two-step PTQ method specifically crafted to reduce quantization errors arising from activation and weight quantization sequentially. The first step, Activation quantization error reduction (Aqer), first applies Reparameterization Initialization aimed at mitigating initial quantization errors in high-variance activations. Then, it further mitigates the errors by formulating a Ridge Regression problem, which updates the weights maintained at full-precision using a closed-form solution. The second step, Weight quantization error reduction (Wqer), first applies Dual Uniform Quantization to handle weights with numerous outliers, which arise from adjustments made during Reparameterization Initialization, thereby reducing initial weight quantization errors. Then, it employs an iterative approach to further tackle the errors. In each iteration, it adopts Rounding Refinement that uses an empirically derived, efficient proxy to refine the rounding directions of quantized weights, complemented by a Ridge Regression solver to reduce the errors. Comprehensive experimental results demonstrate ERQ's superior performance across various ViT variants and tasks. For example, ERQ surpasses the state-of-the-art GPTQ by a notable 36.81% in accuracy for W3A4 ViT-S. 
650 4 |a Journal Article 
700 1 |a Huang, You  |e verfasserin  |4 aut 
700 1 |a Hu, Jiawei  |e verfasserin  |4 aut 
700 1 |a Zhang, Yuxin  |e verfasserin  |4 aut 
700 1 |a Ji, Rongrong  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 47(2025), 4 vom: 15. Apr., Seite 2676-2692  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnas 
773 1 8 |g volume:47  |g year:2025  |g number:4  |g day:15  |g month:04  |g pages:2676-2692 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2025.3528042  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 47  |j 2025  |e 4  |b 15  |c 04  |h 2676-2692
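The abstract's core mechanism in Aqer, compensating activation quantization error by updating the still-full-precision weights via a closed-form Ridge Regression solution, can be illustrated with a generic sketch. This is not the paper's exact formulation (the function name, regularization strength, and objective are assumptions); it solves min over W' of ||X_q W' - X W||^2 + lam ||W' - W||^2, i.e. it nudges the weights so the layer output computed on quantized activations matches the full-precision output.

```python
import numpy as np

def ridge_compensate(X_fp, X_q, W, lam=1e-3):
    """Hypothetical sketch of a ridge-regression weight update.

    X_fp : (n, d) full-precision calibration activations
    X_q  : (n, d) their quantized counterparts
    W    : (d, m) full-precision weights of the layer
    Returns W_new such that X_q @ W_new approximates X_fp @ W,
    with lam * ||W_new - W||^2 keeping the update close to W.
    """
    d = X_q.shape[1]
    target = X_fp @ W                          # full-precision layer output
    # Normal equations of the ridge objective (closed-form solution):
    # (X_q^T X_q + lam I) W_new = X_q^T target + lam W
    A = X_q.T @ X_q + lam * np.eye(d)
    W_new = np.linalg.solve(A, X_q.T @ target + lam * W)
    return W_new

# Toy usage with simulated 2-bit-style rounding of activations
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))
X_q = np.round(X * 4) / 4                      # crude stand-in for quantization
W = rng.normal(size=(8, 4))
W_new = ridge_compensate(X, X_q, W)
err_before = np.linalg.norm(X_q @ W - X @ W)
err_after = np.linalg.norm(X_q @ W_new - X @ W)
```

Because W' = W is a feasible point of the ridge objective, the optimal W_new can never have a larger output error than the uncompensated weights on the calibration data.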