Speech-based emotion recognition with self-supervised models using attentive channel-wise correlations and label smoothing

Authors:

Sofoklis Kakouros, Themos Stafylakis, Ladislav Mošner, Lukáš Burget

Affiliations:

University of Helsinki, Finland
Omilia – Conversational Intelligence, Athens, Greece
Brno University of Technology, Faculty of Information Technology, Speech@FIT, Czechia

Publication Date

November 3, 2022

When recognizing emotions from speech, we encounter two common problems: how to optimally capture emotion-relevant information from the speech signal, and how to best quantify or categorize the noisy, subjective emotion labels. Self-supervised pre-trained representations can robustly capture information from speech, enabling state-of-the-art results in many downstream tasks, including emotion recognition. However, better ways of aggregating the information across time are needed, as the relevant emotion information is likely to appear piecewise rather than uniformly across the signal. For the labels, we need to account for the substantial degree of noise introduced by subjective human annotations. In this paper, we propose a novel approach to attentive pooling based on correlations between the representations' coefficients, combined with label smoothing, a method that reduces the classifier's confidence in the training labels. We evaluate our proposed approach on the benchmark dataset IEMOCAP and demonstrate performance surpassing previously reported results. The code to reproduce the results is available at github.com/skakouros/s3prl_attentive_correlation.
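To give a flavor of the two ideas in the abstract, the sketch below shows one plausible way to (a) pool frame-level self-supervised features into a fixed-size utterance embedding via attention-weighted channel-wise correlations, and (b) soften one-hot emotion labels with label smoothing. This is a minimal illustration under assumed shapes and a simple dot-product attention, not the authors' exact architecture; `attn_vec` stands in for hypothetical learned attention parameters, and consult the linked repository for the actual implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_correlation_pooling(feats, attn_vec):
    """feats: (T, D) frame-level representations; attn_vec: (D,) hypothetical
    learned attention parameters. Returns a fixed-size utterance embedding
    built from attention-weighted correlations between feature channels."""
    scores = softmax(feats @ attn_vec)                 # (T,) frame attention weights
    mean = scores @ feats                              # attention-weighted mean, (D,)
    centered = feats - mean
    cov = (centered * scores[:, None]).T @ centered    # weighted channel covariance, (D, D)
    std = np.sqrt(np.diag(cov)) + 1e-8
    corr = cov / np.outer(std, std)                    # channel-wise correlation matrix
    iu = np.triu_indices_from(corr)                    # upper triangle incl. diagonal
    return corr[iu]                                    # (D*(D+1)/2,) utterance embedding

def smooth_labels(y, num_classes, eps=0.1):
    """One common label-smoothing variant: the true class gets 1 - eps and
    the remaining probability mass eps is shared among the other classes."""
    out = np.full((len(y), num_classes), eps / (num_classes - 1))
    out[np.arange(len(y)), y] = 1.0 - eps
    return out
```

Training against the smoothed targets (e.g. with a cross-entropy loss) discourages the classifier from becoming over-confident on annotations that are themselves noisy, which is the motivation given in the abstract.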