LEADER |
01000caa a22002652 4500 |
001 |
NLM377374709 |
003 |
DE-627 |
005 |
20240918232757.0 |
007 |
cr uuu---uuuuu |
008 |
240910s2024 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2024.3453008
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1538.xml
|
035 |
|
|
|a (DE-627)NLM377374709
|
035 |
|
|
|a (NLM)39250373
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Song, Ze
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Unified and Real-Time Image Geo-Localization via Fine-Grained Overlap Estimation
|
264 |
|
1 |
|c 2024
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 18.09.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Image geo-localization aims to locate a query image from a source platform (e.g., drones, street vehicles) by matching it with geo-tagged reference images from target platforms (e.g., different satellites). Achieving cross-modal or cross-view real-time (>30 fps) image localization with guaranteed accuracy in a unified framework remains a challenge due to the large differences in modality and view between the two platforms. To solve this problem, this paper proposes a novel image geo-localization method based on fine-grained overlap estimation (FOENet), whose core idea is to estimate the salient and subtle overlapping regions in image pairs to ensure correct matching. Specifically, the high-level semantic features of the input images are extracted by a deep convolutional neural network. Then, a novel overlap scanning module (OSM) is presented to mine the long-range spatial and channel dependencies of the semantic features in various subspaces, thereby identifying fine-grained overlapping regions. Finally, we adopt a triplet ranking loss to guide the optimization of the proposed network, so that matching regions are pulled as close together as possible and the most mismatched regions are pushed as far apart as possible. To demonstrate the effectiveness of FOENet, comprehensive experiments are conducted on three cross-view benchmarks and one cross-modal benchmark. FOENet yields better performance on various metrics, and the recall accuracy at top 1 (R1) is significantly improved, with a maximum improvement of 70.6%. In addition, the proposed model runs fast on a single RTX 6000, reaching real-time inference speed on all datasets, with the fastest being 82.3 FPS.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Kang, Xudong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wei, Xiaohui
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Shutao
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liu, Haibo
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 33(2024) vom: 09., Seite 5060-5072
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:33
|g year:2024
|g day:09
|g pages:5060-5072
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2024.3453008
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 33
|j 2024
|b 09
|h 5060-5072
|