LEADER |
01000naa a22002652 4500 |
001 |
NLM304989967 |
003 |
DE-627 |
005 |
20231225120200.0 |
007 |
cr uuu---uuuuu |
008 |
231225s2021 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2019.2962683
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1016.xml
|
035 |
|
|
|a (DE-627)NLM304989967
|
035 |
|
|
|a (NLM)31899413
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Jabi, Mohammed
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Deep Clustering
|b On the Link Between Discriminative Models and K-Means
|
264 |
|
1 |
|c 2021
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 29.09.2021
|
500 |
|
|
|a Date Revised 29.09.2021
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a In the context of recent deep clustering studies, discriminative models dominate the literature and report the most competitive performances. These models learn a deep discriminative neural network classifier in which the labels are latent. Typically, they use multinomial logistic regression posteriors and parameter regularization, as is very common in supervised learning. It is generally acknowledged that discriminative objective functions (e.g., those based on the mutual information or the KL divergence) are more flexible than generative approaches (e.g., K-means) in the sense that they make fewer assumptions about the data distributions and typically yield much better unsupervised deep learning results. On the surface, several recent discriminative models may seem unrelated to K-means. This study shows that these models are, in fact, equivalent to K-means under mild conditions and common posterior models and parameter regularization. We prove that, for the commonly used logistic regression posteriors, maximizing the L2-regularized mutual information via an approximate alternating direction method (ADM) is equivalent to minimizing a soft and regularized K-means loss. Our theoretical analysis not only directly connects several recent state-of-the-art discriminative models to K-means, but also leads to a new soft and regularized deep K-means algorithm, which yields competitive performance on several image clustering benchmarks.
|
650 |
|
4 |
|a Journal Article
|
650 |
|
4 |
|a Research Support, Non-U.S. Gov't
|
700 |
1 |
|
|a Pedersoli, Marco
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Mitiche, Amar
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Ayed, Ismail Ben
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 43(2021), 6 vom: 01. Juni, Seite 1887-1896
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:43
|g year:2021
|g number:6
|g day:01
|g month:06
|g pages:1887-1896
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2019.2962683
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 43
|j 2021
|e 6
|b 01
|c 06
|h 1887-1896
|