Training Speaker Embedding Extractors Using Multi-Speaker Audio with Unknown Speaker Boundaries

Authors:

Themos Stafylakis, Ladislav Mošner, Oldřich Plchot, Johan Rohdin, Anna Silnova, Lukáš Burget, Jan Černocký
Affiliations:

Brno University of Technology, Faculty of Information Technology, Speech@FIT, Czechia
Omilia – Conversational Intelligence, Athens, Greece

Publication Date

March 29, 2022

In this paper, we demonstrate a method for training speaker embedding extractors using weak annotation. More specifically, we use the full VoxCeleb recordings and the names of the celebrities appearing in each video, without knowledge of the time intervals in which they appear. We show that by combining a baseline speaker diarization algorithm that requires no training or parameter tuning, a modified loss with aggregation over segments, and a two-stage training approach, we are able to train a competitive ResNet-based embedding extractor. Finally, we experiment with two different aggregation functions and analyze their behaviour in terms of their gradients.
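The abstract does not state which aggregation functions are compared. As an illustrative sketch only (not the paper's actual implementation), two common choices for pooling per-segment scores into one recording-level score are the mean and LogSumExp; their gradients with respect to the segment scores differ in exactly the way a gradient analysis would expose, with the mean spreading the gradient uniformly and LogSumExp concentrating it on high-scoring segments via a softmax weighting:

```python
import numpy as np

def mean_aggregate(scores: np.ndarray) -> float:
    """Uniform aggregation of per-segment scores."""
    return float(scores.mean())

def logsumexp_aggregate(scores: np.ndarray) -> float:
    """Soft-max-like aggregation: log of the mean of exponentiated scores,
    computed with the usual max-shift for numerical stability."""
    m = scores.max()
    return float(m + np.log(np.exp(scores - m).mean()))

def mean_grad(scores: np.ndarray) -> np.ndarray:
    """d(mean)/d(scores): every segment receives equal weight 1/N."""
    return np.full_like(scores, 1.0 / scores.size, dtype=float)

def logsumexp_grad(scores: np.ndarray) -> np.ndarray:
    """d(LogSumExp)/d(scores): a softmax over segments, so segments with
    higher scores (e.g. those likely containing the target speaker)
    dominate the gradient."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

segment_scores = np.array([0.1, 0.2, 3.0])  # hypothetical per-segment scores
g_mean = mean_grad(segment_scores)
g_lse = logsumexp_grad(segment_scores)
```

Under this sketch, both gradient vectors sum to one, but `g_lse` assigns most of its mass to the third segment, which is why a LogSumExp-style aggregation can tolerate segments from non-target speakers better than plain averaging.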