Speaker verification using end-to-end adversarial language adaptation

Authors:

Johan Rohdin, Themos Stafylakis, Anna Silnova, Hossein Zeinali, Lukas Burget, Oldrich Plchot

Collaborators:

Audias-UAM, Universidad Autonoma de Madrid, Madrid, Spain
CRIM, Montreal (Quebec), Canada
Speechlab, Shanghai Jiao Tong University, China
Omilia – Conversational Intelligence, Athens, Greece
Brno University of Technology, Speech@FIT and IT4I Center of Excellence, Brno, Czechia
Phonexia, Czechia

Publication Date:

November 6, 2018

Abstract:

In this paper we investigate the use of adversarial domain adaptation for addressing the problem of language mismatch between speaker recognition corpora. In the context of speaker verification, adversarial domain adaptation methods aim at minimizing the divergence between the distributions of utterance-level features (i.e. speaker embeddings) drawn from the source and target domains (i.e. languages), while preserving their capacity to recognize speakers. Neural architectures for extracting utterance-level representations enable us to apply adversarial adaptation methods in an end-to-end fashion and train the network jointly with the standard cross-entropy loss. We examine several configurations, such as the use of (pseudo-)labels on the target domain as well as domain labels in the feature extractor, and we demonstrate the effectiveness of our method on the challenging NIST SRE16 and SRE18 benchmarks.
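To make the joint training concrete, the following minimal PyTorch sketch shows one common way to combine a speaker cross-entropy loss with an adversarial domain (language) discriminator through a gradient-reversal layer. It is not the authors' code: the network architecture, dimensions, the lambd scaling factor, and the toy data are illustrative assumptions, standing in for the paper's x-vector-style extractor and real SRE data.

import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class AdversarialSpeakerNet(nn.Module):
    def __init__(self, feat_dim=40, emb_dim=256, n_speakers=1000, n_domains=2):
        super().__init__()
        # Stand-in utterance-level extractor; an x-vector-style embedding
        # network with frame-level layers and pooling is omitted for brevity.
        self.extractor = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, emb_dim), nn.ReLU(),
        )
        self.speaker_head = nn.Linear(emb_dim, n_speakers)  # speaker cross-entropy head
        self.domain_head = nn.Sequential(                   # adversarial domain discriminator
            nn.Linear(emb_dim, 128), nn.ReLU(),
            nn.Linear(128, n_domains),
        )

    def forward(self, x, lambd=1.0):
        emb = self.extractor(x)
        spk_logits = self.speaker_head(emb)
        # The gradient-reversal layer makes minimizing the domain loss push the
        # extractor towards domain-invariant (language-invariant) embeddings.
        dom_logits = self.domain_head(GradReverse.apply(emb, lambd))
        return spk_logits, dom_logits


# One joint training step: speaker loss on labelled source data, domain loss on
# pooled source + target data (domain labels: 0 = source language, 1 = target language).
model = AdversarialSpeakerNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

src_feats, src_spk = torch.randn(8, 40), torch.randint(0, 1000, (8,))  # toy labelled source batch
tgt_feats = torch.randn(8, 40)                                         # toy unlabelled target batch

spk_logits, src_dom_logits = model(src_feats, lambd=0.1)
_, tgt_dom_logits = model(tgt_feats, lambd=0.1)

dom_logits = torch.cat([src_dom_logits, tgt_dom_logits])
dom_labels = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])

loss = ce(spk_logits, src_spk) + ce(dom_logits, dom_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In this sketch the speaker head only sees labelled source data; variants discussed in the paper, such as using (pseudo-)labels on the target domain or feeding domain labels to the feature extractor, would modify which batches contribute to each loss term.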
