LEADER |
01000naa a22002652 4500 |
001 |
NLM344336948 |
003 |
DE-627 |
005 |
20231226022531.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2022 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2022.3193763
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1147.xml
|
035 |
|
|
|a (DE-627)NLM344336948
|
035 |
|
|
|a (NLM)35914044
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Wang, Zhiling
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Ingredient-Guided Region Discovery and Relationship Modeling for Food Category-Ingredient Prediction
|
264 |
|
1 |
|c 2022
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Completed 08.08.2022
|
500 |
|
|
|a Date Revised 08.08.2022
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status MEDLINE
|
520 |
|
|
|a Recognizing the category and its ingredient composition from food images facilitates automatic nutrition estimation, which is crucial to various health-relevant applications, such as nutrition intake management and healthy diet recommendation. Since food is composed of ingredients, discovering ingredient-relevant visual regions can help identify its corresponding category and ingredients. Furthermore, various ingredient relationships like co-occurrence and exclusion are also critical for this task. For that, we propose an ingredient-oriented multi-task food category-ingredient joint learning framework for simultaneous food recognition and ingredient prediction. This framework mainly involves learning an ingredient dictionary for ingredient-relevant visual region discovery and building an ingredient-based semantic-visual graph for ingredient relationship modeling. To obtain ingredient-relevant visual regions, we build an ingredient dictionary to capture multiple ingredient regions and obtain the corresponding assignment map, and then pool the region features belonging to the same ingredient to identify the ingredients more accurately and meanwhile improve the classification performance. For ingredient-relationship modeling, we utilize the visual ingredient representations as nodes and the semantic similarity between ingredient embeddings as edges to construct an ingredient graph, and then learn their relationships via the graph convolutional network to make label embeddings and visual features interact with each other to improve the performance. Finally, fused features from both ingredient-oriented region features and ingredient-relationship features are used in the following multi-task category-ingredient joint learning. Extensive evaluation on three popular benchmark datasets (ETH Food-101, Vireo Food-172 and ISIA Food-200) demonstrates the effectiveness of our method. Further visualization of ingredient assignment maps and attention maps also shows the superiority of our method.
|
650 |
|
4 |
|a Journal Article
|
650 |
|
7 |
|a Food Ingredients
|2 NLM
|
700 |
1 |
|
|a Min, Weiqing
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Zhuo
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Kang, Liping
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wei, Xiaoming
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wei, Xiaolin
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Jiang, Shuqiang
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 31(2022) vom: 01., Seite 5214-5226
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:31
|g year:2022
|g day:01
|g pages:5214-5226
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2022.3193763
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 31
|j 2022
|b 01
|h 5214-5226
|