Harvesting image databases from the Web

The objective of this work is to automatically generate a large number of images for a specified object class. A multimodal approach employing both text, metadata, and visual features is used to gather many high-quality images from the Web. Candidate images are obtained by a text-based Web search querying on the object identifier (e.g., the word penguin). The Webpages and the images they contain are downloaded. The task is then to remove irrelevant images and rerank the remainder. First, the images are reranked based on the text surrounding the image and metadata features. A number of methods are compared for this reranking. Second, the top-ranked images are used as (noisy) training data and an SVM visual classifier is learned to improve the ranking further. We investigate the sensitivity of the cross-validation procedure to this noisy training data. The principal novelty of the overall method is in combining text/metadata and visual features in order to achieve a completely automatic ranking of the images. Examples are given for a selection of animals, vehicles, and other classes, totaling 18 classes. The results are assessed by precision/recall curves on ground-truth annotated data and by comparison to previous approaches, including those of Berg and Forsyth and Fergus et al.
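
As a rough illustration of the two-stage pipeline described in the abstract (rerank candidates by text/metadata features, then train a visual SVM on the noisy top-ranked images), here is a minimal Python sketch using scikit-learn's LinearSVC on synthetic features. The feature arrays, score values, and the fixed positive/negative cut-offs are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stage 1: text/metadata reranking (stand-in scores).
# In the paper each candidate image is scored from features of the text
# surrounding it and the page metadata; here those scores are simulated.
n_images = 500
text_scores = rng.random(n_images)               # hypothetical text/metadata scores
visual_feats = rng.normal(size=(n_images, 128))  # hypothetical visual descriptors
order = np.argsort(-text_scores)                 # rank by text/metadata score

# Stage 2: visual classifier trained on noisy labels.
# Take the top-ranked images as (noisy) positives and the bottom-ranked
# ones as negatives, then learn an SVM on the visual features.
n_pos, n_neg = 100, 100
pos_idx, neg_idx = order[:n_pos], order[-n_neg:]
X_train = np.vstack([visual_feats[pos_idx], visual_feats[neg_idx]])
y_train = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
clf = LinearSVC(C=1.0).fit(X_train, y_train)

# Final ranking: sort all candidates by the SVM decision value.
final_order = np.argsort(-clf.decision_function(visual_feats))
print("Top 10 image indices after visual reranking:", final_order[:10])

The paper additionally examines how sensitive cross-validation is to the label noise in this training set; the fixed cut-offs above are only for illustration.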

Bibliographic details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 33(2011), 4, 01 Apr., pages 754-66
Main author: Schroff, Florian (author)
Other authors: Criminisi, Antonio; Zisserman, Andrew
Format: Online article
Language: English
Published: 2011
Parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article; Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM205954324
003 DE-627
005 20231223234923.0
007 cr uuu---uuuuu
008 231223s2011 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2010.133  |2 doi 
028 5 2 |a pubmed24n0687.xml 
035 |a (DE-627)NLM205954324 
035 |a (NLM)21330688 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Schroff, Florian  |e verfasserin  |4 aut 
245 1 0 |a Harvesting image databases from the Web 
264 1 |c 2011 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 22.07.2011 
500 |a Date Revised 18.02.2011 
500 |a published: Print 
500 |a Citation Status MEDLINE 
520 |a The objective of this work is to automatically generate a large number of images for a specified object class. A multimodal approach employing both text, metadata, and visual features is used to gather many high-quality images from the Web. Candidate images are obtained by a text-based Web search querying on the object identifier (e.g., the word penguin). The Webpages and the images they contain are downloaded. The task is then to remove irrelevant images and rerank the remainder. First, the images are reranked based on the text surrounding the image and metadata features. A number of methods are compared for this reranking. Second, the top-ranked images are used as (noisy) training data and an SVM visual classifier is learned to improve the ranking further. We investigate the sensitivity of the cross-validation procedure to this noisy training data. The principal novelty of the overall method is in combining text/metadata and visual features in order to achieve a completely automatic ranking of the images. Examples are given for a selection of animals, vehicles, and other classes, totaling 18 classes. The results are assessed by precision/recall curves on ground-truth annotated data and by comparison to previous approaches, including those of Berg and Forsyth and Fergus et al.
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Criminisi, Antonio  |e verfasserin  |4 aut 
700 1 |a Zisserman, Andrew  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 33(2011), 4 vom: 01. Apr., Seite 754-66  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:33  |g year:2011  |g number:4  |g day:01  |g month:04  |g pages:754-66 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2010.133  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 33  |j 2011  |e 4  |b 01  |c 04  |h 754-66