Signed Graph Metric Learning via Gershgorin Disc Perfect Alignment



Bibliographic details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 10 (23 Oct. 2022), pp. 7219-7234
Lead author: Yang, Cheng
Other authors: Cheung, Gene; Hu, Wei
Format: Online article
Language: English
Published: 2022
Collection: IEEE Transactions on Pattern Analysis and Machine Intelligence
Subjects: Journal Article
Description
Abstract: Given a convex and differentiable objective Q(M) for a real symmetric matrix M in the positive definite (PD) cone (used to compute Mahalanobis distances), we propose a fast general metric learning framework that is entirely projection-free. We first assume that M resides in a space S of generalized graph Laplacian matrices corresponding to balanced signed graphs. An M ∈ S that is also PD is called a graph metric matrix. Unlike the low-rank metric matrices common in the literature, S includes the important diagonal-only matrices as a special case. The key theorem that circumvents full eigen-decomposition and enables fast metric matrix optimization is Gershgorin disc perfect alignment (GDPA): given M ∈ S and a diagonal matrix S with S_ii = 1/v_i, where v is the first eigenvector of M, we prove that the Gershgorin disc left-ends of the similarity transform B = SMS^{-1} are perfectly aligned at the smallest eigenvalue λ_min. Using this theorem, we replace the PD cone constraint in the metric learning problem with the tightest possible linear constraints per iteration, so that the alternating optimization of the diagonal / off-diagonal terms in M can be solved efficiently as linear programs via the Frank-Wolfe method. We update v using Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) with warm start as entries of M are optimized successively. Experiments show that our graph metric optimization is significantly faster than cone-projection schemes, and produces competitive binary classification performance.
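The GDPA claim is easy to verify numerically: take a PD generalized graph Laplacian M of a balanced signed graph, build S from the reciprocals of the first eigenvector's entries, and check that every Gershgorin disc left-end of SMS^{-1} lands on λ_min. The sketch below does this for a hypothetical 4-node graph with all-positive edge weights (a trivially balanced signed graph); the weights and diagonal loading are arbitrary assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 4-node graph with positive edge weights (a balanced
# signed graph); weights and diagonal loading are arbitrary choices.
W = np.array([[0., 2., 0., 1.],
              [2., 0., 3., 0.],
              [0., 3., 0., 4.],
              [1., 0., 4., 0.]])
D = np.diag(W.sum(axis=1))
M = D - W + np.diag([0.5, 1.0, 0.3, 0.8])  # generalized graph Laplacian, PD here

# First eigenvector v (for the smallest eigenvalue); for a connected
# balanced graph its entries are nonzero, so 1/v_i is well defined.
lam, V = np.linalg.eigh(M)
lmin, v = lam[0], V[:, 0]
if v.sum() < 0:            # fix the arbitrary sign returned by eigh
    v = -v

S = np.diag(1.0 / v)
B = S @ M @ np.linalg.inv(S)   # similarity transform B = S M S^{-1}

# Gershgorin disc left-end of row i: B_ii - sum_{j != i} |B_ij|
left_ends = np.diag(B) - (np.abs(B).sum(axis=1) - np.abs(np.diag(B)))
print(np.allclose(left_ends, lmin))  # True: all discs align at lambda_min
```

Since B is a similarity transform of M, it shares M's eigenvalues, and the aligned left-ends mean the (cheap) Gershgorin lower bound on λ_min becomes tight, which is what lets the PD constraint be replaced by per-row linear constraints.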
Description: Date revised 16.09.2022
Published: Print-Electronic
Citation status: PubMed-not-MEDLINE
ISSN: 1939-3539
DOI: 10.1109/TPAMI.2021.3091682