LEADER |
01000caa a22002652 4500 |
001 |
NLM370921798 |
003 |
DE-627 |
005 |
20240907232452.0 |
007 |
cr uuu---uuuuu |
008 |
240412s2024 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2024.3387433
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1526.xml
|
035 |
|
|
|a (DE-627)NLM370921798
|
035 |
|
|
|a (NLM)38602855
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Liang, Weixuan
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a On the Consistency and Large-Scale Extension of Multiple Kernel Clustering
|
264 |
|
1 |
|c 2024
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 06.09.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a Existing multiple kernel clustering (MKC) algorithms suffer from two ubiquitous problems. From the theoretical perspective, most MKC algorithms lack sufficient theoretical analysis, especially of the consistency of learned parameters such as the kernel weights. From the practical perspective, their high complexity prevents MKC from handling large-scale datasets. This paper addresses both issues. We first conduct a consistency analysis of an influential MKC method, Simple Multiple Kernel k-Means (SimpleMKKM). Specifically, let γ̂_n denote the kernel weights learned by SimpleMKKM from n training samples. We also define the expected version of SimpleMKKM and denote its solution by γ*. We establish an upper bound on ||γ̂_n − γ*||∞ of order Õ(1/√n). Based on this result, we also derive the excess clustering risk measured by a standard clustering loss function. For the large-scale extension, we replace the eigen decomposition in SimpleMKKM with a singular value decomposition (SVD), which reduces the complexity to O(n) and allows SimpleMKKM to be applied to large-scale datasets. We then deduce several theoretical results that verify the approximation ability of the proposed SVD-based method. Comprehensive experiments demonstrate the superiority of the proposed method.
|
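The abstract above describes replacing the eigen decomposition of the n × n kernel matrix in SimpleMKKM with an SVD to bring the cost down to O(n). The following is a minimal illustrative sketch, not the authors' implementation: it assumes an explicit (or approximate) feature map Phi of the combined kernel with K = Phi Phiᵀ, and shows that the top left singular vectors of Phi span the same subspace as the top eigenvectors of K. All variable names and dimensions below are hypothetical.

# Sketch: thin SVD of an n x d feature matrix vs. eigen decomposition
# of the n x n kernel matrix it induces (same spectral embedding).
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 64, 5          # samples, feature dimension, clusters (all illustrative)

# Phi stands in for an explicit/approximate feature map of the combined kernel,
# so that K = Phi @ Phi.T is the n x n kernel matrix.
Phi = rng.standard_normal((n, d))

# Costly route: eigen decomposition of the n x n kernel matrix.
K = Phi @ Phi.T
eigvals, eigvecs = np.linalg.eigh(K)          # eigenvalues in ascending order
H_eig = eigvecs[:, -k:]                       # top-k eigenvectors

# Cheap route: thin SVD of the n x d feature matrix, O(n d^2), linear in n.
U, S, Vt = np.linalg.svd(Phi, full_matrices=False)  # singular values descending
H_svd = U[:, :k]                              # top-k left singular vectors

# The two embeddings span the same subspace (up to sign/rotation),
# so the difference of their projection matrices is near machine precision.
proj_gap = np.linalg.norm(H_eig @ H_eig.T - H_svd @ H_svd.T)
print(f"subspace difference: {proj_gap:.2e}")

This toy check only illustrates why an SVD can stand in for the kernel eigen decomposition at linear cost in n; the approximation guarantees for the paper's actual SVD-based method are the theoretical results referenced in the abstract.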
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Tang, Chang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liu, Xinwang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liu, Yong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Liu, Jiyuan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhu, En
|e verfasserin
|4 aut
|
700 |
1 |
|
|a He, Kunlun
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 46(2024), 10 vom: 01. Sept., Seite 6935-6947
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:46
|g year:2024
|g number:10
|g day:01
|g month:09
|g pages:6935-6947
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2024.3387433
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 46
|j 2024
|e 10
|b 01
|c 09
|h 6935-6947
|