LEADER |
01000naa a22002652 4500 |
001 |
NLM261341634 |
003 |
DE-627 |
005 |
20231224195341.0 |
007 |
cr uuu---uuuuu |
008 |
231224s2017 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2016.2578323
|2 doi
|
028 |
5 |
2 |
|a pubmed24n0871.xml
|
035 |
|
|
|a (DE-627)NLM261341634
|
035 |
|
|
|a (NLM)27295652
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Zhang, Weizhong
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Sparse Learning with Stochastic Composite Optimization
|
264 |
|
1 |
|c 2017
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 25.10.2018
|
500 |
|
|
|a Date Revised 25.10.2018
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning, which aims to learn a sparse solution from a composite function. Most recent SCO algorithms have already reached the optimal expected convergence rate O(1/λT), but they often fail to deliver sparse solutions at the end, either due to the limited sparsity regularization during stochastic optimization (SO) or due to the limitation in online-to-batch conversion. Even when the objective function is strongly convex, their high probability bounds can only attain O(√{log(1/δ)/T}), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme that adds a novel, powerful sparse online-to-batch conversion to general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to demonstrate its effectiveness. Both the theoretical analysis and the experimental results show that our methods outperform existing methods in their ability to learn sparse solutions, while also improving the high probability bound to approximately O(log(log(T)/δ)/λT)
|
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a Research Support, Non-U.S. Gov't
|
700 |
1 |
|
|a Zhang, Lijun
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Jin, Zhongming
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Jin, Rong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Cai, Deng
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Xuelong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liang, Ronghua
|e verfasserin
|4 aut
|
700 |
1 |
|
|a He, Xiaofei
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 39(2017), 6 vom: 01. Juni, Seite 1223-1236
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:39
|g year:2017
|g number:6
|g day:01
|g month:06
|g pages:1223-1236
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2016.2578323
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 39
|j 2017
|e 6
|b 01
|c 06
|h 1223-1236
|