Online Asymmetric Metric Learning With Multi-Layer Similarity Aggregation for Cross-Modal Retrieval
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 28 (2019), no. 9, 3 Sept., pp. 4299-4312
Author:
Further authors:
Format: Online article
Language: English
Published: 2019
Access to the parent work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Keywords: Journal Article
Abstract: Cross-modal retrieval has attracted intensive attention in recent years, where a substantial yet challenging problem is how to measure the similarity between heterogeneous data modalities. Despite using modality-specific representation learning techniques, most existing shallow or deep models treat different modalities equally and neglect the intrinsic modality heterogeneity and information imbalance between images and texts. In this paper, we propose an online similarity function learning framework to learn a metric that well reflects the cross-modal semantic relation. Considering that multiple CNN feature layers naturally represent visual information from low-level visual patterns to high-level semantic abstraction, we propose a new asymmetric image-text similarity formulation that aggregates the layer-wise visual-textual similarities parameterized by different bilinear parameter matrices. To effectively learn the aggregated similarity function, we develop three similarity combination strategies: average kernel, multiple kernel learning, and layer gating. The former two kernel-based strategies assign uniform layer weights to all data pairs; the latter operates on the original feature representation and assigns instance-aware layer weights to individual data pairs. All three are learned by preserving the bi-directional relative similarity expressed by a large number of cross-modal training triplets. Experiments conducted on three public datasets demonstrate the effectiveness of our methods.
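The abstract describes an aggregated bilinear similarity of the form s(x, y) = Σ_l w_l · x_l^T M_l y, where x_l is the image feature from CNN layer l, y is the text feature, M_l is a layer-specific bilinear parameter matrix, and w_l is a layer weight (uniform for the average kernel, learned for multiple kernel learning, and instance-dependent for layer gating). Below is a minimal sketch of this formulation with a triplet hinge loss; all dimensions, variable names, and the margin value are illustrative assumptions, not the authors' implementation.

```python
# Sketch of multi-layer bilinear similarity aggregation with a triplet
# hinge loss, as outlined in the abstract. Dimensions and names are
# hypothetical; the paper's actual optimization is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: three CNN layers for the image, one text embedding.
layer_dims = [256, 512, 1024]   # per-layer image feature sizes (assumed)
text_dim = 300                  # text feature size (assumed)

# One bilinear parameter matrix M_l per visual layer. The asymmetry:
# images are represented by multiple layers, texts by a single vector.
M = [rng.standard_normal((d, text_dim)) * 0.01 for d in layer_dims]

def layer_similarities(img_layers, txt):
    """Per-layer bilinear similarities s_l = x_l^T M_l y."""
    return np.array([x @ M_l @ txt for x, M_l in zip(img_layers, M)])

def aggregated_similarity(img_layers, txt, weights):
    """Weighted aggregation over layers: uniform weights correspond to
    the average-kernel strategy; learned weights to MKL-style combination."""
    return weights @ layer_similarities(img_layers, txt)

def triplet_hinge(img_layers, txt_pos, txt_neg, weights, margin=1.0):
    """Relative-similarity constraint: a matched image-text pair should
    score higher than a mismatched one by a margin."""
    s_pos = aggregated_similarity(img_layers, txt_pos, weights)
    s_neg = aggregated_similarity(img_layers, txt_neg, weights)
    return max(0.0, margin - s_pos + s_neg)

# Toy usage with random features and uniform (average-kernel) weights.
img = [rng.standard_normal(d) for d in layer_dims]
txt_pos = rng.standard_normal(text_dim)
txt_neg = rng.standard_normal(text_dim)
uniform_w = np.full(len(layer_dims), 1.0 / len(layer_dims))
print(triplet_hinge(img, txt_pos, txt_neg, uniform_w))
```

In the paper's layer-gating variant the weights would additionally depend on the individual data pair rather than being shared, which this sketch does not model.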
Description: Date revised: 03.07.2019. Published: Print-Electronic. Citation status: PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2019.2908774