LEGO-MM : LEarning Structured Model by Probabilistic loGic Ontology Tree for MultiMedia


Bibliographische Detailangaben
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - Volume 26 (2017), Issue 1 (15 Jan.), pages 196-207
Main Author: Jinhui Tang (Author)
Other Authors: Shiyu Chang, Guo-Jun Qi, Qi Tian, Yong Rui, Thomas S. Huang
Format: Online Article
Language: English
Published: 2017
Access to parent work: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Summary: Recent advances in multimedia ontology have resulted in a number of concept models, e.g., the Large-Scale Concept Ontology for Multimedia and MediaMill 101, which are publicly accessible to other researchers. However, most current research effort still focuses on building new concepts from scratch; few works explore appropriate methods to construct new concepts upon the existing models already in the warehouse. To address this issue, we propose a new framework in this paper, termed LEarning Structured Model by Probabilistic loGic Ontology Tree for MultiMedia (LEGO-MM), which can seamlessly integrate both the new target training examples and the existing primitive concept models to infer more complex concept models. LEGO-MM treats the primitive concept models as LEGO toys with which to construct a potentially unlimited vocabulary of new concepts. Specifically, we first formulate the logic operations to serve as the LEGO connectors that combine the existing concept models hierarchically in probabilistic logic ontology trees. Then, we incorporate new target training information simultaneously to efficiently disambiguate the underlying logic tree and correct error propagation. Extensive experiments are conducted on a large vehicle-domain data set from ImageNet. The results demonstrate that LEGO-MM significantly outperforms the existing state-of-the-art methods, which build new concept models from scratch.
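
The abstract describes combining existing primitive concept classifiers through probabilistic logic operations (the "LEGO connectors") at the internal nodes of an ontology tree. The following Python sketch is an illustration of that general idea only, not the paper's actual formulation: it combines hypothetical primitive concept scores with a noisy-OR / product-AND rule, and all names (Node, score) and the independence assumption are ours, not taken from the paper.

from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Node:
    """One node of a (hypothetical) probabilistic logic ontology tree."""
    name: str
    children: List["Node"] = field(default_factory=list)
    op: str = "OR"                                   # how child probabilities are combined
    model: Optional[Callable[[object], float]] = None  # primitive concept classifier at a leaf


def score(node: Node, x) -> float:
    """Return an illustrative P(concept | x) for this node."""
    if node.model is not None:                       # leaf: reuse an existing primitive model
        return node.model(x)
    child_probs = [score(c, x) for c in node.children]
    if node.op == "OR":                              # probabilistic (noisy-) OR of children
        p = 1.0
        for q in child_probs:
            p *= (1.0 - q)
        return 1.0 - p
    p = 1.0                                          # probabilistic AND: product of children,
    for q in child_probs:                            # assuming independence
        p *= q
    return p


# Example: "vehicle" as the probabilistic OR of two primitive concept models.
car = Node("car", model=lambda x: 0.8)
truck = Node("truck", model=lambda x: 0.3)
vehicle = Node("vehicle", children=[car, truck], op="OR")
print(score(vehicle, None))                          # 1 - (1 - 0.8) * (1 - 0.3) = 0.86

In the paper's framework, the target training examples are additionally used to disambiguate and correct such a logic tree; that learning step is not shown here.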
Description: Date Revised 20.11.2019
Published: Print-Electronic
Citation Status PubMed-not-MEDLINE
ISSN: 1941-0042
DOI: 10.1109/TIP.2016.2612825