Deep Audio-Visual Speech Recognition

The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem - unconstrained natural language sentences, and in the wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release a new dataset for audio-visual speech recognition, LRS2-BBC, consisting of thousands of natural sentences from British television. The models that we train surpass the performance of all previous work on a lip reading benchmark dataset by a significant margin.
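The abstract contrasts a model trained with a CTC loss against one trained with a sequence-to-sequence loss, both built on transformer self-attention. The sketch below is not the authors' released code; it is a minimal PyTorch illustration of how the two loss setups differ over a shared encoder, with the feature dimension, vocabulary size, sequence lengths, and dummy tensors all assumed for the example (teacher-forcing shift and causal masking are omitted for brevity).

```python
import torch
import torch.nn as nn

VOCAB = 40      # assumed character vocabulary (blank token at index 0)
D_MODEL = 512   # assumed per-frame feature dimension

# Shared transformer encoder over per-frame lip (or audio) features.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8), num_layers=6)

feats = torch.randn(75, 2, D_MODEL)         # (time, batch, dim) dummy input
targets = torch.randint(1, VOCAB, (2, 20))  # dummy character targets

# --- Variant 1: CTC loss over per-frame character posteriors -------------
ctc_head = nn.Linear(D_MODEL, VOCAB)
log_probs = ctc_head(encoder(feats)).log_softmax(-1)
loss_ctc = nn.CTCLoss(blank=0)(
    log_probs, targets,
    input_lengths=torch.full((2,), 75),
    target_lengths=torch.full((2,), 20))

# --- Variant 2: sequence-to-sequence loss with an attention decoder ------
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=D_MODEL, nhead=8), num_layers=6)
embed = nn.Embedding(VOCAB, D_MODEL)
out_proj = nn.Linear(D_MODEL, VOCAB)

memory = encoder(feats)                      # encoded video features
dec_out = out_proj(decoder(embed(targets.t()), memory))
loss_s2s = nn.functional.cross_entropy(
    dec_out.reshape(-1, VOCAB), targets.t().reshape(-1))
```

The practical difference: CTC scores a monotonic frame-to-character alignment and needs no decoder, while the sequence-to-sequence variant conditions each output character on the full encoded sequence through cross-attention.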

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 44(2022), 12, 24 Dec., pages 8717-8727
Main Author: Afouras, Triantafyllos (Author)
Other Authors: Chung, Joon Son; Senior, Andrew; Vinyals, Oriol; Zisserman, Andrew
Format: Online Article
Language: English
Published: 2022
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article; Research Support, Non-U.S. Gov't
LEADER 01000naa a22002652 4500
001 NLM292119054
003 DE-627
005 20231225072401.0
007 cr uuu---uuuuu
008 231225s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2018.2889052  |2 doi 
028 5 2 |a pubmed24n0973.xml 
035 |a (DE-627)NLM292119054 
035 |a (NLM)30582526 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Afouras, Triantafyllos  |e verfasserin  |4 aut 
245 1 0 |a Deep Audio-Visual Speech Recognition 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 09.11.2022 
500 |a Date Revised 19.11.2022 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem - unconstrained natural language sentences, and in the wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release a new dataset for audio-visual speech recognition, LRS2-BBC, consisting of thousands of natural sentences from British television. The models that we train surpass the performance of all previous work on a lip reading benchmark dataset by a significant margin 
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Chung, Joon Son  |e verfasserin  |4 aut 
700 1 |a Senior, Andrew  |e verfasserin  |4 aut 
700 1 |a Vinyals, Oriol  |e verfasserin  |4 aut 
700 1 |a Zisserman, Andrew  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 44(2022), 12 vom: 24. Dez., Seite 8717-8727  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:44  |g year:2022  |g number:12  |g day:24  |g month:12  |g pages:8717-8727 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2018.2889052  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 44  |j 2022  |e 12  |b 24  |c 12  |h 8717-8727
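The record above uses the pipe-delimited display form of MARC 21 fields: a three-digit tag, optional indicators, then subfields introduced by `|x` codes. The sketch below is a minimal, self-contained parser for exactly this textual dump (not binary ISO 2709 MARC); the function name and the sample lines are taken from this page, everything else is an assumption for illustration.

```python
import re

def parse_field(line: str):
    """Split one display line into (tag, indicators, [(code, value), ...])."""
    head, _, rest = line.partition("|")
    parts = head.split()
    tag, indicators = parts[0], " ".join(parts[1:])
    # Each subfield starts with "|x " where x is a one-character code.
    subfields = [(m.group(1), m.group(2).strip())
                 for m in re.finditer(r"\|(\w)\s([^|]*)", "|" + rest)]
    return tag, indicators, subfields

record = [
    "100 1 |a Afouras, Triantafyllos  |e verfasserin  |4 aut",
    "245 1 0 |a Deep Audio-Visual Speech Recognition",
    "700 1 |a Chung, Joon Son  |e verfasserin  |4 aut",
]
for line in record:
    print(parse_field(line))
# ('100', '1', [('a', 'Afouras, Triantafyllos'), ('e', 'verfasserin'), ('4', 'aut')])
```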