Model-Free Distortion Rectification Framework Bridged by Distortion Distribution Map
Published in: | IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - (2020), 17 Jan. |
---|---|
Author: | |
Further authors: | , , |
Format: | Online article |
Language: | English |
Published: | 2020 |
Access to parent work: | IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |
Keywords: | Journal Article |
Abstract: | Recently, learning-based distortion rectification schemes have shown high efficiency. However, most of these methods only focus on a specific camera model with fixed parameters and thus cannot be extended to other models. To avoid this disadvantage, we propose a model-free distortion rectification framework for the single-shot case, bridged by the distortion distribution map (DDM). Our framework is based on the observation that the pixel-wise distortion information of a distorted image is mathematically regular, despite different models having different types and numbers of distortion parameters. Motivated by this observation, instead of estimating the heterogeneous distortion parameters, we construct a distortion distribution map that intuitively indicates the global distortion features of a distorted image. In addition, we develop a dual-stream feature learning module that benefits from both the advantages of traditional methods, which leverage local handcrafted features, and learning-based methods, which focus on global semantic feature perception. Due to the sparsity of the handcrafted features, we discretize them into a 2D point map and learn their structure with a PointNet-inspired network. Finally, a multimodal attention fusion module is designed to attentively fuse the local structural and global semantic features, providing hybrid features for more plausible scene recovery. The experimental results demonstrate the excellent generalization ability and superior performance of our method in both quantitative and qualitative evaluations, compared with the state-of-the-art methods. |
Description: | Date Revised: 27.02.2024; published: Print-Electronic; Citation Status: Publisher |
ISSN: | 1941-0042 |
DOI: | 10.1109/TIP.2020.2964523 |
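
The abstract above describes two concrete technical ideas that a short sketch can make tangible. First, the distortion distribution map (DDM): a per-pixel map of distortion magnitude. The following is a minimal illustration only, assuming a single-parameter polynomial radial model with coefficient `k1` and a centered principal point; the paper's DDM is model-free and predicted by a network, so every name and parameter here is a hypothetical stand-in for how such a ground-truth map could be constructed.

```python
import numpy as np

def distortion_distribution_map(h, w, k1=-1e-6, cx=None, cy=None):
    """Per-pixel radial distortion magnitude under an assumed
    single-parameter polynomial model x_d = x_u * (1 + k1 * r^2).
    Illustrative only; the paper's DDM is learned, not computed."""
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    r = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    # Displacement between distorted and undistorted position:
    # |x_d - x_u| = |k1| * r^3 along the radial direction.
    return np.abs(k1) * r ** 3

ddm = distortion_distribution_map(256, 256)
print(ddm.min(), ddm.max())  # smallest at the center, largest at the corners
```

Second, the multimodal attention fusion of the two feature streams. A minimal PyTorch sketch of one plausible gated fusion is shown below; the feature dimension and the sigmoid-gated convex combination are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Sketch: attentively fuse a local structural feature (from the
    PointNet-style stream) with a global semantic feature (from the
    CNN stream). Layer sizes and gating scheme are assumptions."""
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, local_feat, global_feat):
        # Channel-wise attention weights from the concatenated streams.
        a = self.gate(torch.cat([local_feat, global_feat], dim=-1))
        # Convex combination yields the hybrid feature for scene recovery.
        return a * local_feat + (1 - a) * global_feat

fusion = AttentionFusion(dim=256)
hybrid = fusion(torch.randn(4, 256), torch.randn(4, 256))
print(hybrid.shape)  # torch.Size([4, 256])
```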