Unifying Dimensions : A Linear Adaptive Mixer for Lightweight Image Super-Resolution
| Published in: | IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - PP(2025), 17 Oct. |
|---|---|
| Format: | Online article |
| Language: | English |
| Published: | 2025 |
| Parent work: | IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society |
| Keywords: | Journal Article |
| Abstract: | Window-based Transformers have demonstrated outstanding performance in super-resolution due to their adaptive modeling capabilities through local self-attention (SA). However, they exhibit higher computational complexity and inference latency than convolutional neural networks. In this paper, we first identify that the adaptability of Transformers derives from their adaptive spatial aggregation and advanced structural design, while their high latency results from computational costs and memory layout transformations. To address these limitations while emulating this aggregation scheme, we propose an efficient convolution-based Focal Separable Attention (FSA) mechanism that enables long-range dynamic modeling with linear computational complexity. Additionally, we introduce a dual-branch structure integrated with an ultra-lightweight Information Exchange Module (IEM) to enhance information aggregation within the token mixing process. Finally, we modify existing spatial-gate-based feed-forward networks by incorporating a self-gate mechanism that preserves high-dimensional channel information, enabling the modeling of more complex relationships; we refer to this modification as the Dual-Gated Feed-Forward Network (DGFN). With these advancements, we construct a convolution-based Transformer framework named the Linear Adaptive Mixer Network (LAMNet). Extensive experiments demonstrate that LAMNet outperforms existing Transformer-based methods while maintaining the computational efficiency of convolutional neural networks, achieving a 3× inference-time speedup. The code will be publicly available at: https://github.com/zononhzy/LAMNet |
|---|---|
| Description: | Date revised: 17.10.2025; published: Print-Electronic; citation status: Publisher |
| ISSN: | 1941-0042 |
| DOI: | 10.1109/TIP.2025.3620672 |
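The abstract describes a feed-forward block in which a spatial/content gate is augmented with a self-gate so that high-dimensional channel information is preserved. The following NumPy snippet is a minimal sketch of that general gated feed-forward pattern only; the split sizes, the GELU/sigmoid activations, and the exact form of the self-gate are illustrative assumptions, not LAMNet's actual DGFN design (see the linked repository for the authors' implementation).

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_gated_ffn(x, w_in, w_out):
    """Gated feed-forward block with a content gate plus a self-gate (assumed form).

    x     : (n_tokens, c)  input features
    w_in  : (c, 2h)        expansion; output is split into value / gate halves
    w_out : (h, c)         projection back to c channels
    """
    h = x @ w_in                    # channel expansion
    v, g = np.split(h, 2, axis=-1)  # value half and gate half
    y = v * gelu(g)                 # first gate: value modulated by gate branch
    y = y * sigmoid(y)              # second "self-gate": features modulate themselves
    return y @ w_out                # project back to c channels

# toy usage: 8 tokens, 16 channels, hidden width 32
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))
w_in = rng.standard_normal((16, 64)) * 0.1
w_out = rng.standard_normal((32, 16)) * 0.1
out = dual_gated_ffn(x, w_in, w_out)
print(out.shape)  # (8, 16)
```

The key point the sketch illustrates is that both gates act element-wise, so the block keeps linear complexity in the number of tokens, consistent with the linear-complexity goal stated in the abstract.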