LEADER 01000naa a22002652 4500
001 NLM270599444
003 DE-627
005 20231224230917.0
007 cr uuu---uuuuu
008 231224s2017 xx |||||o 00| ||eng c
024 7  |a 10.1109/TIP.2017.2689999 |2 doi
028 52 |a pubmed24n0902.xml
035    |a (DE-627)NLM270599444
035    |a (NLM)28371777
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Zhang, Kaihao |e verfasserin |4 aut
245 10 |a Facial Expression Recognition Based on Deep Evolutional Spatial-Temporal Networks
264  1 |c 2017
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 30.07.2018
500    |a Date Revised 30.07.2018
500    |a published: Print-Electronic
500    |a Citation Status PubMed-not-MEDLINE
520    |a One key challenge in facial expression recognition is capturing the dynamic variation of facial physical structure from videos. In this paper, we propose a part-based hierarchical bidirectional recurrent neural network (PHRNN) to analyze the facial expression information of temporal sequences. Our PHRNN models facial morphological variations and the dynamic evolution of expressions, and effectively extracts "temporal features" from facial landmarks (geometry information) across consecutive frames. Meanwhile, to complement the still appearance information, a multi-signal convolutional neural network (MSCNN) is proposed to extract "spatial features" from still frames. We use both recognition and verification signals as supervision to calculate different loss functions, which help enlarge the differences between expression classes and reduce the differences within the same class. This deep evolutional spatial-temporal network (composed of the PHRNN and the MSCNN) extracts the partial-whole, geometry-appearance, and dynamic-still information, effectively boosting the performance of facial expression recognition. Experimental results show that our method largely outperforms state-of-the-art approaches: on three widely used facial expression databases (CK+, Oulu-CASIA, and MMI), it reduces the error rates of the previous best methods by 45.5%, 25.8%, and 24.4%, respectively.
650  4 |a Journal Article
700 1  |a Huang, Yongzhen |e verfasserin |4 aut
700 1  |a Du, Yong |e verfasserin |4 aut
700 1  |a Wang, Liang |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |d 1992 |g 26(2017), 9 vom: 15. Sept., Seite 4193-4203 |w (DE-627)NLM09821456X |x 1941-0042 |7 nnns
773 18 |g volume:26 |g year:2017 |g number:9 |g day:15 |g month:09 |g pages:4193-4203
856 40 |u http://dx.doi.org/10.1109/TIP.2017.2689999 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_NLM
912    |a GBV_ILN_350
951    |a AR
952    |d 26 |j 2017 |e 9 |b 15 |c 09 |h 4193-4203