LEADER |
01000naa a22002652 4500 |
001 |
NLM341168378 |
003 |
DE-627 |
005 |
20231226011146.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2022 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2022.3175432
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1137.xml
|
035 |
|
|
|a (DE-627)NLM341168378
|
035 |
|
|
|a (NLM)35594231
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Cao, Jinming
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a DO-Conv
|b Depthwise Over-Parameterized Convolutional Layer
|
264 |
|
1 |
|c 2022
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 27.05.2022
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Convolutional layers are the core building blocks of Convolutional Neural Networks (CNNs). In this paper, we propose to augment a convolutional layer with an additional depthwise convolution, where each input channel is convolved with a different 2D kernel. The composition of the two convolutions constitutes an over-parameterization, since it adds learnable parameters, while the resulting linear operation can be expressed by a single convolution layer. We refer to this depthwise over-parameterized convolutional layer as DO-Conv, which is a novel way of over-parameterization. We show with extensive experiments that the mere replacement of conventional convolutional layers with DO-Conv layers boosts the performance of CNNs on many classical vision tasks, such as image classification, detection, and segmentation. Moreover, in the inference phase, the depthwise convolution is folded into the conventional convolution, reducing the computation to be exactly equivalent to that of a convolutional layer without over-parameterization. As DO-Conv introduces performance gains without incurring any computational complexity increase for inference, we advocate it as an alternative to the conventional convolutional layer. We have open-sourced an implementation of DO-Conv in TensorFlow, PyTorch and GluonCV at https://github.com/yangyanli/DO-Conv
|
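Note: the abstract above describes folding the extra depthwise kernel into the conventional kernel at inference time. The following is a minimal PyTorch sketch of that fold; the tensor shapes and names (C_out, C_in, K, D_mul, W, D) are illustrative assumptions, not taken from the authors' released code.

import torch
import torch.nn.functional as F

# Assumed shapes: C_out output channels, C_in input channels,
# K x K spatial kernel, and a depth multiplier D_mul for the depthwise part.
C_out, C_in, K = 8, 3, 3
D_mul = K * K

W = torch.randn(C_out, C_in, D_mul)   # "conventional" trainable kernel
D = torch.randn(C_in, D_mul, K * K)   # extra depthwise trainable kernel (one matrix per input channel)

# Inference-time folding: compose the two linear maps per input channel into a
# single K x K kernel, so the deployed layer is an ordinary convolution.
W_folded = torch.einsum('oci,cik->ock', W, D).reshape(C_out, C_in, K, K)

x = torch.randn(1, C_in, 32, 32)
y = F.conv2d(x, W_folded, padding=K // 2)   # same cost as a plain K x K convolution

Because the fold is performed once before deployment, the additional parameters only affect training, which matches the abstract's claim of no computational overhead at inference.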
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Li, Yangyan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Sun, Mingchao
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Chen, Ying
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Lischinski, Dani
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Cohen-Or, Daniel
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Chen, Baoquan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Tu, Changhe
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 31(2022) vom: 01., Seite 3726-3736
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:31
|g year:2022
|g day:01
|g pages:3726-3736
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2022.3175432
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 31
|j 2022
|b 01
|h 3726-3736
|