Unifying Dimensions: A Linear Adaptive Mixer for Lightweight Image Super-Resolution

Window-based Transformers have demonstrated outstanding performance in super-resolution due to their adaptive modeling capabilities through local self-attention (SA). However, they exhibit higher computational complexity and inference latency than convolutional neural networks. In this paper, we first identify that the adaptability of Transformers derives from their adaptive spatial aggregation and advanced structural design, while their high latency results from computational costs and memory layout transformations. To address these limitations and simulate the aggregation approach, we propose an efficient convolution-based Focal Separable Attention (FSA) mechanism that enables long-range dynamic modeling with linear computational complexity. Additionally, we introduce a dual-branch structure integrated with an ultra-lightweight Information Exchange Module (IEM) to enhance information aggregation within the token-mixing process. Finally, we modify existing spatial-gate-based feed-forward networks by incorporating a self-gate mechanism to preserve high-dimensional channel information, enabling the modeling of more complex relationships; we refer to this modification as the Dual-Gated Feed-Forward Network (DGFN). With these advancements, we construct a convolution-based Transformer framework named the Linear Adaptive Mixer Network (LAMNet). Extensive experiments demonstrate that LAMNet outperforms existing Transformer-based methods while maintaining the computational efficiency of convolutional neural networks, achieving a 3× speedup in inference time. The code will be publicly available at: https://github.com/zononhzy/LAMNet
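The abstract names a Dual-Gated Feed-Forward Network (DGFN) that adds a self-gate to a spatial-gate-based feed-forward network so that per-channel information survives the gating. The paper's exact formulation is not reproduced in this record, so the following is only a hypothetical minimal sketch in plain Python: a cross-half gate (one half of the channel vector gates the other) combined with a self-gate path, with all function names illustrative.

```python
import math

def gelu(v):
    # tanh approximation of the GELU nonlinearity
    return 0.5 * v * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (v + 0.044715 * v ** 3)))

def dual_gated_ffn(x):
    """Hypothetical dual-gate step on one channel vector.

    Combines a split gate (first half of the channels gates the second,
    as in spatial-gate FFNs) with a self-gate (each value gates itself),
    so each channel's own information also reaches the output.
    """
    half = len(x) // 2
    a, b = x[:half], x[half:]
    split_gate = [gelu(ai) * bi for ai, bi in zip(a, b)]  # cross-half gate
    self_gate = [gelu(v) * v for v in a]                  # self-gate preserves channel info
    return [s + g for s, g in zip(split_gate, self_gate)]
```

Under this sketch, `dual_gated_ffn([1.0, -1.0, 0.5, 2.0])` returns a half-length vector in which each output channel mixes the gated cross-half product with the channel's self-gated value.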

Detailed description

Bibliographic details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - PP(2025), 17 Oct.
First author: Hu, Zhenyu (author)
Other authors: Sun, Wanjie
Format: Online article
Language: English
Published: 2025
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
LEADER 01000naa a22002652c 4500
001 NLM394217071
003 DE-627
005 20251018232424.0
007 cr uuu---uuuuu
008 251018s2025 xx |||||o 00| ||eng c
024 7 |a 10.1109/TIP.2025.3620672  |2 doi 
028 5 2 |a pubmed25n1603.xml 
035 |a (DE-627)NLM394217071 
035 |a (NLM)41105539 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Hu, Zhenyu  |e verfasserin  |4 aut 
245 1 0 |a Unifying Dimensions  |b A Linear Adaptive Mixer for Lightweight Image Super-Resolution 
264 1 |c 2025 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 17.10.2025 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a Window-based Transformers have demonstrated outstanding performance in super-resolution due to their adaptive modeling capabilities through local self-attention (SA). However, they exhibit higher computational complexity and inference latency than convolutional neural networks. In this paper, we first identify that the adaptability of Transformers derives from their adaptive spatial aggregation and advanced structural design, while their high latency results from computational costs and memory layout transformations. To address these limitations and simulate the aggregation approach, we propose an efficient convolution-based Focal Separable Attention (FSA) mechanism that enables long-range dynamic modeling with linear computational complexity. Additionally, we introduce a dual-branch structure integrated with an ultra-lightweight Information Exchange Module (IEM) to enhance information aggregation within the token-mixing process. Finally, we modify existing spatial-gate-based feed-forward networks by incorporating a self-gate mechanism to preserve high-dimensional channel information, enabling the modeling of more complex relationships; we refer to this modification as the Dual-Gated Feed-Forward Network (DGFN). With these advancements, we construct a convolution-based Transformer framework named the Linear Adaptive Mixer Network (LAMNet). Extensive experiments demonstrate that LAMNet outperforms existing Transformer-based methods while maintaining the computational efficiency of convolutional neural networks, achieving a 3× speedup in inference time. The code will be publicly available at: https://github.com/zononhzy/LAMNet 
650 4 |a Journal Article 
700 1 |a Sun, Wanjie  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society  |d 1992  |g PP(2025) vom: 17. Okt.  |w (DE-627)NLM09821456X  |x 1941-0042  |7 nnas 
773 1 8 |g volume:PP  |g year:2025  |g day:17  |g month:10 
856 4 0 |u http://dx.doi.org/10.1109/TIP.2025.3620672  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2025  |b 17  |c 10