Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems

Many classical Computer Vision problems, such as essential matrix computation and pose estimation from 3D to 2D correspondences, can be tackled by solving a linear least-square problem, which can be done by finding the eigenvector corresponding to the smallest, or zero, eigenvalue of a matrix representing a linear system. Incorporating this in deep learning frameworks would allow us to explicitly encode known notions of geometry, instead of having the network implicitly learn them from data. However, performing eigendecomposition within a network requires the ability to differentiate this operation. While theoretically doable, this introduces numerical instability in the optimization process in practice. In this paper, we introduce an eigendecomposition-free approach to training a deep network whose loss depends on the eigenvector corresponding to a zero eigenvalue of a matrix predicted by the network. We demonstrate that our approach is much more robust than explicit differentiation of the eigendecomposition using two general tasks, outlier rejection and denoising, with several practical examples including wide-baseline stereo, the perspective-n-point problem, and ellipse fitting. Empirically, our method has better convergence properties and yields state-of-the-art results.
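The abstract's core idea admits a compact illustration: if the matrix A predicted by the network should have the ground-truth solution e as an eigenvector with zero eigenvalue, then A e = 0, so the scalar e^T A e can be minimized as a loss without ever differentiating through an eigendecomposition. Below is a minimal NumPy sketch of that idea; the exp(-trace) regularizer against the trivial solution A = 0 and the function name eigenfree_loss are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def eigenfree_loss(A, e, alpha=1.0):
    """Sketch of a zero-eigenvalue loss (assumed form, not the paper's exact one)."""
    e = e / np.linalg.norm(e)            # unit-norm target eigenvector
    residual = e @ A @ e                 # zero iff A e = 0 for PSD A
    regularizer = np.exp(-np.trace(A))   # discourage the trivial solution A = 0
    return residual + alpha * regularizer

# Toy usage: points lying on the plane n^T x = 0, so the plane normal n is
# the zero-eigenvalue eigenvector of the least-squares matrix A = X^T X.
rng = np.random.default_rng(0)
n = np.array([0.0, 0.0, 1.0])
X = rng.normal(size=(100, 3))
X -= np.outer(X @ n, n)                  # project points onto the plane
A = X.T @ X                              # PSD system matrix with A @ n ~ 0
print(eigenfree_loss(A, n))              # near-zero residual plus tiny regularizer
```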

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 43(2021), 9, 09 Sept., pages 3167-3182
Main Author: Dang, Zheng (Author)
Other Authors: Yi, Kwang Moo, Hu, Yinlin, Wang, Fei, Fua, Pascal, Salzmann, Mathieu
Format: Online Article
Language: English
Published: 2021
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM307379922
003 DE-627
005 20231225125432.0
007 cr uuu---uuuuu
008 231225s2021 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2020.2978812  |2 doi 
028 5 2 |a pubmed24n1024.xml 
035 |a (DE-627)NLM307379922 
035 |a (NLM)32149625 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Dang, Zheng  |e verfasserin  |4 aut 
245 1 0 |a Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems 
264 1 |c 2021 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 05.08.2021 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Many classical Computer Vision problems, such as essential matrix computation and pose estimation from 3D to 2D correspondences, can be tackled by solving a linear least-square problem, which can be done by finding the eigenvector corresponding to the smallest, or zero, eigenvalue of a matrix representing a linear system. Incorporating this in deep learning frameworks would allow us to explicitly encode known notions of geometry, instead of having the network implicitly learn them from data. However, performing eigendecomposition within a network requires the ability to differentiate this operation. While theoretically doable, this introduces numerical instability in the optimization process in practice. In this paper, we introduce an eigendecomposition-free approach to training a deep network whose loss depends on the eigenvector corresponding to a zero eigenvalue of a matrix predicted by the network. We demonstrate that our approach is much more robust than explicit differentiation of the eigendecomposition using two general tasks, outlier rejection and denoising, with several practical examples including wide-baseline stereo, the perspective-n-point problem, and ellipse fitting. Empirically, our method has better convergence properties and yields state-of-the-art results.
650 4 |a Journal Article 
700 1 |a Yi, Kwang Moo  |e verfasserin  |4 aut 
700 1 |a Hu, Yinlin  |e verfasserin  |4 aut 
700 1 |a Wang, Fei  |e verfasserin  |4 aut 
700 1 |a Fua, Pascal  |e verfasserin  |4 aut 
700 1 |a Salzmann, Mathieu  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 43(2021), 9 vom: 09. Sept., Seite 3167-3182  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:43  |g year:2021  |g number:9  |g day:09  |g month:09  |g pages:3167-3182 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2020.2978812  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 43  |j 2021  |e 9  |b 09  |c 09  |h 3167-3182