Extracting speaker and emotion information from self-supervised speech models via channel-wise correlations

Authors:

Themos Stafylakis, Ladislav Mosner, Sofoklis Kakouros, Oldrich Plchot, Lukas Burget, Jan Cernocky
Affiliations:

Brno University of Technology, Faculty of Information Technology, Speech@FIT, Czechia
Omilia – Conversational Intelligence, Athens, Greece
University of Helsinki, Finland

Publication Date:

October 15, 2022

Self-supervised learning of speech representations from large amounts of unlabeled data has enabled state-of-the-art results in several speech processing tasks. Aggregating these representations across time is typically done with descriptive statistics, in particular the first- and second-order statistics of the representation coefficients. In this paper, we examine an alternative way of extracting speaker and emotion information from self-supervised models, based on the correlations between the coefficients of the representations (correlation pooling). We show improvements over mean pooling, and further gains when the two pooling methods are combined via fusion. The code is available at this http URL.
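For intuition, the following is a minimal NumPy sketch contrasting the two pooling strategies described in the abstract: mean pooling versus pooling the pairwise correlations between representation channels. It is an illustration only, not the authors' released implementation; the feature shape (T frames by D channels), the epsilon guard, and the use of the off-diagonal upper triangle of the correlation matrix are assumptions made for the example.

```python
import numpy as np

def mean_pooling(feats):
    """Average frame-level features over time: (T, D) -> (D,)."""
    return feats.mean(axis=0)

def correlation_pooling(feats, eps=1e-8):
    """Pool (T, D) frame-level features into the off-diagonal upper triangle
    of their D x D channel-wise (Pearson) correlation matrix."""
    centered = feats - feats.mean(axis=0, keepdims=True)
    normed = centered / (centered.std(axis=0, keepdims=True) + eps)
    corr = normed.T @ normed / feats.shape[0]          # D x D correlation matrix
    iu = np.triu_indices(corr.shape[0], k=1)           # skip the all-ones diagonal
    return corr[iu]                                    # D * (D - 1) / 2 values

# Toy example: 200 frames of 768-dimensional self-supervised representations
x = np.random.randn(200, 768)
print(mean_pooling(x).shape)         # (768,)
print(correlation_pooling(x).shape)  # (294528,)
```

In this sketch the correlation-pooled vector grows quadratically with the number of channels, so in practice it is typically followed by a projection or restricted to a subset of channels before classification.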