LEADER |
01000naa a22002652 4500 |
001 |
NLM355719983 |
003 |
DE-627 |
005 |
20231226064934.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2023 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2023.3266659
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1185.xml
|
035 |
|
|
|a (DE-627)NLM355719983
|
035 |
|
|
|a (NLM)37067971
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Song, Ze
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a FSNet
|b Focus Scanning Network for Camouflaged Object Detection
|
264 |
|
1 |
|c 2023
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 24.04.2023
|
500 |
|
|
|a Date Revised 24.04.2023
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Camouflaged object detection (COD) aims to discover objects that blend in with the background due to similar colors or textures. Existing deep learning methods do not systematically illustrate the key tasks in COD, which seriously hinders the improvement of their performance. In this paper, we introduce the concept of focus areas, i.e., regions containing discernible colors or textures, and develop a two-stage focus scanning network for camouflaged object detection. Specifically, a novel encoder-decoder module is first designed to determine a region where the focus areas may appear. In this process, a multi-layer Swin transformer is deployed to encode global context information between the object and the background, and a novel cross-connection decoder is proposed to fuse cross-layer textures and semantics. Then, we utilize multi-scale dilated convolution to obtain discriminative features at different scales in the focus areas. Meanwhile, a dynamic difficulty-aware loss is designed to guide the network to pay more attention to structural details. Extensive experimental results on the benchmarks CAMO, CHAMELEON, COD10K, and NC4K illustrate that the proposed method performs favorably against other state-of-the-art methods.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Kang, Xudong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wei, Xiaohui
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liu, Haibo
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Dian, Renwei
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Shutao
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 32(2023) vom: 17., Seite 2267-2278
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:32
|g year:2023
|g day:17
|g pages:2267-2278
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2023.3266659
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 32
|j 2023
|b 17
|h 2267-2278
|