Pushing the boundaries of audiovisual word recognition using residual networks and LSTMs

Authors:

Themos Stafylakis, Muhammad Haris Khan, Georgios Tzimiropoulos

Collaborators:

Audias-UAM, Universidad Autonoma de Madrid, Madrid, Spain
CRIM, Montreal (Quebec), Canada
Speechlab, Shanghai Jiao Tong University, China
Omilia – Conversational Intelligence, Athens, Greece
Brno University of Technology, Speech@FIT and IT4I Center of Excellence, Brno, Czechia
Phonexia, Czechia

Publication Date

November 1, 2018

Visual and audiovisual speech recognition are witnessing a renaissance, largely due to the advent of deep learning methods. In this paper, we present a deep learning architecture for lipreading and audiovisual word recognition, which combines Residual Networks equipped with spatiotemporal input layers and Bidirectional LSTMs. The lipreading architecture attains an 11.92% misclassification rate on the challenging Lipreading-In-The-Wild database, which is composed of excerpts from BBC-TV, each containing one of 500 target words. Audiovisual experiments are performed using both intermediate and late integration, as well as several types and levels of environmental noise, and notable improvements over the audio-only network are reported, even in the case of clean speech. We further analyze the utility of target word boundaries and the capacity of the network to model the linguistic context of the target word. Finally, we examine difficult word pairs and discuss how visual information helps attain higher recognition accuracy.
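To make the described architecture concrete, the sketch below illustrates the general pattern the abstract refers to: a spatiotemporal (3D-convolutional) input layer over a sequence of mouth-region frames, a 2D residual trunk applied per frame, and a bidirectional LSTM over the resulting frame embeddings, ending in a 500-way word classifier. This is a minimal illustration, not the authors' exact model; all layer sizes, kernel shapes, and the temporal averaging before the classifier are assumptions made for brevity.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic 2D residual block: two conv-BN layers with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

class LipreadingNet(nn.Module):
    """Illustrative lipreading network: 3D front-end -> 2D ResNet trunk -> BiLSTM."""
    def __init__(self, num_words=500, hidden=256):
        super().__init__()
        # Spatiotemporal (3D) input layer over grayscale mouth-region crops.
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2),
                      padding=(2, 3, 3), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        # Shallow residual trunk applied to each frame independently.
        self.trunk = nn.Sequential(ResidualBlock(64), ResidualBlock(64),
                                   nn.AdaptiveAvgPool2d(1))
        # Bidirectional LSTM over the per-frame embeddings.
        self.blstm = nn.LSTM(64, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_words)

    def forward(self, x):
        # x: (batch, 1, time, height, width), e.g. (B, 1, 29, 112, 112)
        feats = self.frontend(x)                         # (B, 64, T, H', W')
        b, c, t, h, w = feats.shape
        feats = feats.transpose(1, 2).reshape(b * t, c, h, w)
        feats = self.trunk(feats).view(b, t, c)          # (B, T, 64)
        seq, _ = self.blstm(feats)                       # (B, T, 2 * hidden)
        return self.classifier(seq.mean(dim=1))          # pool over time, classify

# Example: a batch of 2 clips, each 29 grayscale frames of 112x112 pixels.
logits = LipreadingNet()(torch.randn(2, 1, 29, 112, 112))
print(logits.shape)  # torch.Size([2, 500])

In an audiovisual setting of the kind the abstract mentions, an analogous audio branch would produce its own sequence of embeddings; intermediate integration would fuse the two streams before the recurrent layers, while late integration would combine the per-branch word scores.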