Compact and Discriminative Descriptor Inference Using Multi-Cues


Bibliographic Details
Published in: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 24 (2015), No. 12, 25 Dec., pages 5114-5126
Main Author: Han, Yahong (author)
Other Authors: Yang, Yi; Wu, Fei; Hong, Richang
Format: Online article
Language: English
Published: 2015
Access to Parent Work: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article; Research Support, Non-U.S. Gov't
Description
Summary: Feature descriptors around local interest points are widely used in human action recognition, both for images and videos. However, each kind of descriptor describes the local characteristics around the reference point from only one cue. To enhance the descriptive and discriminative ability across multiple cues, this paper proposes a descriptor learning framework that optimizes the descriptors at the source by learning a projection from multiple descriptors' spaces into a new Euclidean space. In this space, the multiple cues and characteristics of the different descriptors are fused and complement each other. To make the new descriptor more discriminative, we learn the multi-cue projection by minimizing the ratio of within-class scatter to between-class scatter, thereby enhancing the discriminative ability of the projected descriptor. In the experiments, we evaluate our framework on the tasks of action recognition from still images and videos. Experimental results on two benchmark image and two benchmark video data sets demonstrate the effectiveness and superior performance of our method.
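The summary describes a Fisher-style scatter-ratio criterion for learning the multi-cue projection. Below is a minimal sketch of that idea, not the authors' actual implementation: it assumes cue fusion by simple concatenation and solves the scatter-ratio objective as a generalized eigenvalue problem; the function name `learn_multicue_projection` and all variable names are illustrative assumptions.

```python
# Hedged sketch: learn a projection W that makes within-class scatter
# small relative to between-class scatter on concatenated multi-cue
# descriptors. Concatenation and the eigen-solver are our assumptions.
import numpy as np
from scipy.linalg import eigh

def learn_multicue_projection(descriptor_sets, labels, out_dim):
    """descriptor_sets: list of (n_samples, d_i) arrays, one per cue.
    labels: (n_samples,) class labels.
    Returns W of shape (sum(d_i), out_dim)."""
    X = np.hstack(descriptor_sets)            # fuse cues by concatenation
    labels = np.asarray(labels)
    n, d = X.shape
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))                     # within-class scatter
    Sb = np.zeros((d, d))                     # between-class scatter
    for c in np.unique(labels):
        Xc = X[labels == c]
        diff = Xc - Xc.mean(axis=0)
        Sw += diff.T @ diff
        mc = (Xc.mean(axis=0) - mean_all)[:, None]
        Sb += len(Xc) * (mc @ mc.T)
    Sw += 1e-6 * np.eye(d)                    # regularize for invertibility
    # Generalized eigenproblem: the top eigenvectors of (Sb, Sw) maximize
    # between/within scatter, i.e. minimize the within/between ratio
    # mentioned in the summary.
    vals, vecs = eigh(Sb, Sw)
    W = vecs[:, np.argsort(vals)[::-1][:out_dim]]
    return W

# Usage: project fused descriptors into the learned space.
# Z = np.hstack(descriptor_sets) @ W
```

The generalized eigenvalue formulation is the standard closed-form route for this kind of scatter-ratio objective; the paper itself may use a different optimization.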
Description: Date Completed 03.02.2016
Date Revised 27.01.2016
Published: Print-Electronic
Citation Status: PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2015.2479917