LEADER 01000caa a22002652 4500
001    NLM366849786
003    DE-627
005    20240115232023.0
007    cr uuu---uuuuu
008    240114s2024 xx |||||o 00| ||eng c
024 7  |a 10.1109/TIP.2023.3347929 |2 doi
028 52 |a pubmed24n1260.xml
035    |a (DE-627)NLM366849786
035    |a (NLM)38194375
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Chang, Jiahao |e verfasserin |4 aut
245 10 |a EI-MVSNet |b Epipolar-Guided Multi-View Stereo Network With Interval-Aware Label
264  1 |c 2024
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Revised 15.01.2024
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a Recent learning-based methods demonstrate their strong ability to estimate depth for multi-view stereo reconstruction. However, most of these methods extract features directly via regular or deformable convolutions, and few works consider the alignment of the receptive fields between views while constructing the cost volume. Through analyzing the constraints and inference of previous MVS networks, we find that there are still some shortcomings that hinder performance. To deal with the above issues, we propose an Epipolar-Guided Multi-View Stereo Network with Interval-Aware Label (EI-MVSNet), which includes an epipolar-guided volume construction module and an interval-aware depth estimation module in a unified architecture for MVS. The proposed EI-MVSNet enjoys several merits. First, in the epipolar-guided volume construction module, we construct the cost volume from features with aligned receptive fields between different pairs of reference and source images via epipolar-guided convolutions, which take rotation and scale changes into account. Second, in the interval-aware depth estimation module, we attempt to supervise the cost volume directly and to make depth estimation independent of extraneous values by perceiving the upper and lower boundaries, which can achieve fine-grained predictions and enhance the reasoning ability of the network. Extensive experimental results on two standard benchmarks demonstrate that our EI-MVSNet performs favorably against state-of-the-art MVS methods. Specifically, our EI-MVSNet ranks 1st on both the intermediate and advanced subsets of the Tanks and Temples benchmark, which verifies the high precision and strong robustness of our model.
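
The 520 abstract describes supervising the cost volume with an interval-aware label that perceives the upper and lower boundaries of the depth interval containing the ground truth. The minimal Python sketch below illustrates one plausible reading of such a label, assuming per-pixel depth hypotheses and a known ground-truth depth: the label mass is split between the two hypotheses bounding the ground truth rather than placed on a single nearest hypothesis. The names interval_aware_label and depth_hyps are illustrative assumptions, not the authors' implementation.

import numpy as np

def interval_aware_label(depth_hyps, depth_gt):
    """Build a label over D depth hypotheses for one pixel.

    Illustrative sketch (not the paper's exact formulation): the
    ground-truth depth is encoded by the lower and upper boundaries
    of the hypothesis interval that contains it, weighted by linear
    proximity, instead of a one-hot label at the nearest hypothesis.

    depth_hyps : (D,) monotonically increasing depth hypotheses
    depth_gt   : scalar ground-truth depth
    returns    : (D,) label that sums to 1
    """
    D = depth_hyps.shape[0]
    label = np.zeros(D, dtype=np.float64)

    # Clamp to the sampled range so every pixel falls in a valid interval.
    d = np.clip(depth_gt, depth_hyps[0], depth_hyps[-1])

    # Locate the interval [depth_hyps[lo], depth_hyps[hi]] containing d.
    hi = int(np.searchsorted(depth_hyps, d, side="left"))
    hi = max(1, min(hi, D - 1))
    lo = hi - 1

    # Split the probability mass between the interval's two boundaries.
    t = (d - depth_hyps[lo]) / (depth_hyps[hi] - depth_hyps[lo])
    label[lo] = 1.0 - t
    label[hi] = t
    return label

if __name__ == "__main__":
    hyps = np.linspace(2.0, 4.0, num=8)     # 8 depth hypotheses
    print(interval_aware_label(hyps, 2.9))  # mass on the two bounding hypotheses
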
650  4 |a Journal Article
700 1  |a He, Jianfeng |e verfasserin |4 aut
700 1  |a Zhang, Tianzhu |e verfasserin |4 aut
700 1  |a Yu, Jiyang |e verfasserin |4 aut
700 1  |a Wu, Feng |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |d 1992 |g 33(2024) vom: 09., Seite 753-766 |w (DE-627)NLM09821456X |x 1941-0042 |7 nnns
773 18 |g volume:33 |g year:2024 |g day:09 |g pages:753-766
856 40 |u http://dx.doi.org/10.1109/TIP.2023.3347929 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 33 |j 2024 |b 09 |h 753-766