Multimodal Similarity-Preserving Hashing
| Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence. - 1979. - Vol. 36 (2014), No. 4, 1 Apr., pp. 824-830 |
|---|---|
| Author: | |
| Other authors: | , , |
| Format: | Online article |
| Language: | English |
| Published: | 2014 |
| Access to the parent work: | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Keywords: | Journal Article; Research Support, Non-U.S. Gov't |
| Abstract: | We introduce an efficient computational framework for hashing data belonging to multiple modalities into a single representation space where they become mutually comparable. The proposed approach is based on a novel coupled siamese neural network architecture and allows unified treatment of intra- and inter-modality similarity learning. Unlike existing cross-modality similarity learning approaches, our hashing functions are not limited to binarized linear projections and can assume arbitrarily complex forms. We show experimentally that our method significantly outperforms state-of-the-art hashing approaches on multimedia retrieval tasks. |
| Description: | Date completed: 27.11.2015; date revised: 10.09.2015; published: print; citation status: PubMed-not-MEDLINE |
| ISSN: | 1939-3539 |
| DOI: | 10.1109/TPAMI.2013.225 |
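The core idea stated in the abstract can be illustrated with a minimal NumPy sketch: each modality gets its own nonlinear hash branch (a small two-layer map with sign binarization), but both branches emit codes in the same k-bit Hamming space, so an image code and a text code become directly comparable. This is a hypothetical illustration, not the paper's implementation: the weights below are random rather than learned from similarity-labeled pairs, and all names (`make_hash_fn`, `hash_img`, `hash_txt`) and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hash_fn(in_dim, n_bits, hidden=32):
    """One branch of a coupled siamese hashing network: a small nonlinear
    map followed by sign binarization. Weights are random here; in the
    paper's setting they would be trained so that similar cross-modal
    pairs receive nearby codes and dissimilar pairs distant ones."""
    W1 = rng.standard_normal((in_dim, hidden))
    W2 = rng.standard_normal((hidden, n_bits))
    def f(x):
        h = np.tanh(x @ W1)               # nonlinear hidden layer
        return np.sign(np.tanh(h @ W2))   # k-bit code in {-1, +1}
    return f

def hamming(a, b):
    """Hamming distance between two {-1, +1} codes."""
    return int(np.sum(a != b))

# Two modalities with different input dimensions, one shared code space.
n_bits = 16
hash_img = make_hash_fn(in_dim=64, n_bits=n_bits)  # e.g. image features
hash_txt = make_hash_fn(in_dim=20, n_bits=n_bits)  # e.g. text features

img = rng.standard_normal(64)
txt = rng.standard_normal(20)
code_i, code_t = hash_img(img), hash_txt(txt)
print(code_i.shape, hamming(code_i, code_t))
```

The key property shown is that the branches need not be binarized linear projections (each is an arbitrary nonlinear map), yet retrieval across modalities reduces to cheap Hamming-distance comparisons on fixed-length codes.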