On the Use of Semantically-Aligned Speech Representations for Spoken Language Understanding

Authors:

G Laperrière, V Pelloin, M Rouvier, T Stafylakis, Y Estève
Affiliations:

LIA – Avignon Université, France
LIUM – Le Mans Université, France
Omilia – Conversational Intelligence, Greece

Publication Date:

October 10, 2022

Abstract

In this paper, we examine the use of semantically-aligned speech representations for end-to-end spoken language understanding (SLU). We employ the recently introduced SAMU-XLSR model, which is designed to generate a single embedding that captures the semantics of an utterance and is semantically aligned across languages. This model combines the acoustic frame-level speech representation learning model XLS-R with the Language Agnostic BERT Sentence Embedding (LaBSE) model. We show that using the SAMU-XLSR model instead of the initial XLS-R model significantly improves performance in an end-to-end SLU framework. Finally, we present the benefits of this model for language portability in SLU.
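To make the described architecture concrete, the following Python sketch shows the general idea: a frame-level XLS-R encoder is pooled into a single utterance-level vector and trained to match the frozen LaBSE embedding of the utterance's transcript. This is a minimal sketch under stated assumptions, not the authors' implementation; in particular, the mean pooling, the linear projection, and the cosine loss are assumptions, and the Hugging Face checkpoint names are given only for illustration.

import torch
import torch.nn.functional as F
from transformers import Wav2Vec2Model
from sentence_transformers import SentenceTransformer

# Frame-level speech encoder (trainable) and frozen text teacher.
xlsr = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-xls-r-300m")
labse = SentenceTransformer("sentence-transformers/LaBSE")

# Project XLS-R's 1024-dim frame features into LaBSE's 768-dim space
# (an assumed implementation detail, not taken from the paper).
proj = torch.nn.Linear(xlsr.config.hidden_size, 768)

def speech_embedding(waveform: torch.Tensor) -> torch.Tensor:
    """Pool frame-level XLS-R outputs into one utterance-level vector."""
    frames = xlsr(waveform).last_hidden_state   # (1, T, 1024)
    pooled = frames.mean(dim=1)                 # (1, 1024), mean pooling
    return F.normalize(proj(pooled), dim=-1)    # (1, 768), unit norm

def alignment_loss(waveform: torch.Tensor, transcript: str) -> torch.Tensor:
    """Cosine-distance loss pulling the speech embedding toward the
    frozen LaBSE embedding of the utterance's transcript."""
    with torch.no_grad():
        target = labse.encode(transcript, convert_to_tensor=True)
        target = F.normalize(target, dim=-1).unsqueeze(0)   # (1, 768)
    return 1.0 - F.cosine_similarity(speech_embedding(waveform), target).mean()

# Example: one second of 16 kHz audio paired with its transcript.
loss = alignment_loss(torch.randn(1, 16000), "turn on the kitchen lights")
loss.backward()  # updates XLS-R and the projection; LaBSE stays frozen

In such a setup, only the speech encoder and the projection receive gradients; keeping the LaBSE teacher frozen is what lets the resulting speech embeddings inherit its cross-lingual alignment, which is the property exploited for language portability in SLU.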