Quantifying and transferring contextual information in object detection

Bibliographic details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 34(2012), 4, 13 Apr., pages 762-77
Main author: Zheng, Wei-Shi (Author)
Other authors: Gong, Shaogang, Xiang, Tao
Format: Online article
Language: English
Published: 2012
In collection: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article; Research Support, Non-U.S. Gov't
LEADER 01000caa a22002652 4500
001 NLM21078024X
003 DE-627
005 20250213035756.0
007 cr uuu---uuuuu
008 231224s2012 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2011.164  |2 doi 
028 5 2 |a pubmed25n0702.xml 
035 |a (DE-627)NLM21078024X 
035 |a (NLM)21844619 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Zheng, Wei-Shi  |e verfasserin  |4 aut 
245 1 0 |a Quantifying and transferring contextual information in object detection 
264 1 |c 2012 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 10.09.2012 
500 |a Date Revised 31.05.2012 
500 |a published: Print 
500 |a Citation Status MEDLINE 
520 |a Context is critical for reducing the uncertainty in object detection. However, context modeling is challenging because there are often many different types of contextual information coexisting with different degrees of relevance to the detection of target object(s) in different images. It is therefore crucial to devise a context model to automatically quantify and select the most effective contextual information for assisting in detecting the target object. Nevertheless, the diversity of contextual information means that learning a robust context model requires a larger training set than learning the target object appearance model, which may not be available in practice. In this work, a novel context modeling framework is proposed without the need for any prior scene segmentation or context annotation. We formulate a polar geometric context descriptor for representing multiple types of contextual information. In order to quantify context, we propose a new maximum margin context (MMC) model to evaluate and measure the usefulness of contextual information directly and explicitly through a discriminant context inference method. Furthermore, to address the problem of context learning with limited data, we exploit the idea of transfer learning based on the observation that although two categories of objects can have very different visual appearance, there can be similarity in their context and/or the way contextual information helps to distinguish target objects from nontarget objects. To that end, two novel context transfer learning models are proposed which utilize training samples from source object classes to improve the learning of the context model for a target object class based on a joint maximum margin learning framework. Experiments are carried out on PASCAL VOC2005 and VOC2007 data sets, a luggage detection data set extracted from the i-LIDS data set, and a vehicle detection data set extracted from outdoor surveillance footage. 
Our results validate the effectiveness of the proposed models for quantifying and transferring contextual information, and demonstrate that they outperform related alternative context models. 
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Gong, Shaogang  |e verfasserin  |4 aut 
700 1 |a Xiang, Tao  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 34(2012), 4 vom: 13. Apr., Seite 762-77  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:34  |g year:2012  |g number:4  |g day:13  |g month:04  |g pages:762-77 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2011.164  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 34  |j 2012  |e 4  |b 13  |c 04  |h 762-77