How to Improve Your Speaker Embeddings Extractor in Generic Toolkits

Authors:

Hossein Zeinali, Lukáš Burget, Johan Rohdin, Themos Stafylakis, Jan Černocký

Collaborators:

Audias-UAM, Universidad Autónoma de Madrid, Madrid, Spain
CRIM, Montréal (Québec), Canada
Speechlab, Shanghai Jiao Tong University, China
Omilia – Conversational Intelligence, Athens, Greece
Brno University of Technology, Speech@FIT and IT4I Center of Excellence, Brno, Czechia
Phonexia, Czechia

Publication Date:

November 5, 2018

Recently, speaker embeddings extracted with deep neural networks have become the state-of-the-art method for speaker verification. In this paper we aim to facilitate the implementation of this method in a more generic toolkit than Kaldi, which we anticipate will enable further improvements to the method. We examine several training tricks, such as the effects of normalizing input features and pooled statistics, different methods for preventing overfitting, and alternative non-linearities that can be used instead of Rectified Linear Units (ReLUs). In addition, we investigate the difference in performance between TDNN and CNN architectures, and between two types of attention mechanism. Experimental results on the Speakers in the Wild, SRE 2016 and SRE 2018 datasets demonstrate the effectiveness of the proposed implementation.
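
As a rough illustration only (not the authors' implementation), the PyTorch sketch below shows the kind of statistics-pooling layer such speaker embedding extractors build on: frame-level features from a TDNN/CNN encoder are aggregated into an utterance-level mean and standard deviation, optionally weighted by a simple learned attention and optionally length-normalized before the embedding layers. The module name, the attention parameterization, and the normalization choice are all assumptions made for this example.

# Hypothetical sketch of (attentive) statistics pooling for speaker
# embeddings; illustrative only, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveStatsPooling(nn.Module):
    """Pools frame-level features into a fixed-size utterance vector by
    concatenating an (optionally attention-weighted) mean and std."""
    def __init__(self, feat_dim, attention=False, normalize_stats=True, eps=1e-5):
        super().__init__()
        # one scalar attention score per frame (hypothetical parameterization)
        self.attention = nn.Linear(feat_dim, 1) if attention else None
        self.normalize_stats = normalize_stats
        self.eps = eps

    def forward(self, x):
        # x: (batch, frames, feat_dim) output of a TDNN/CNN frame encoder
        if self.attention is not None:
            # per-frame score, softmax over time so weights sum to one
            w = F.softmax(self.attention(x), dim=1)           # (batch, frames, 1)
        else:
            # plain statistics pooling: uniform frame weights
            w = torch.full_like(x[..., :1], 1.0 / x.size(1))
        mean = (w * x).sum(dim=1)                             # (batch, feat_dim)
        var = (w * (x - mean.unsqueeze(1)) ** 2).sum(dim=1)   # weighted variance
        std = torch.sqrt(var + self.eps)
        stats = torch.cat([mean, std], dim=1)                 # (batch, 2 * feat_dim)
        if self.normalize_stats:
            # one example of normalizing pooled statistics: length-normalize
            # them before the utterance-level embedding layers
            stats = F.normalize(stats, dim=1)
        return stats

# Usage: pool 300 frames of 512-dim features from 8 utterances.
pool = AttentiveStatsPooling(feat_dim=512, attention=True)
frames = torch.randn(8, 300, 512)
pooled = pool(frames)   # (8, 1024), input to the embedding layers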