Omilia R&D team presenting at Interspeech 2019

Published on: 09-07-2019

The Omilia R&D team will be presenting three papers in Austria at Interspeech this year, as a result of ongoing collaborations between Omilia and distinguished academic institutions.

Below you will find each paper's abstract, along with links to review and download it; contact us if you would like additional information or to reach the R&D team involved.

 

Detecting Spoofing Attacks Using VGG and SincNet: BUT-Omilia Submission to ASVspoof 2019 Challenge

Brno University of Technology (Czechia) and Omilia Conversational Intelligence

Hossein Zeinali, Themos Stafylakis, Georgia Athanasopoulou, Johan Rohdin, Ioannis Gkinis, Lukáš Burget, and Jan “Honza” Černocký

Abstract:

In this paper, we present the system description of the joint efforts of Brno University of Technology (BUT) and Omilia-Conversational Intelligence for the ASVspoof 2019 Spoofing and Countermeasures Challenge. The primary submission for Physical access (PA) is a fusion of two VGG networks, trained on single- and two-channel features. For Logical access (LA), our primary system is a fusion of VGG and the recently introduced SincNet architecture. The results on PA show that the proposed networks yield very competitive performance in all conditions and achieved 86% relative improvement compared to the official baseline. On the other hand, the results on LA showed that although the proposed architecture and training strategy performs very well on certain spoofing attacks, it fails to generalize to certain attacks that are unseen during training.
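The primary submissions described above are score-level fusions of two systems. As a rough illustration of that idea only (the fusion weight and this linear form are our assumptions; the paper's actual fusion is trained on a development set, e.g. via logistic regression), a minimal sketch:

```python
import numpy as np

def fuse_scores(scores_a, scores_b, w=0.5):
    """Linear score-level fusion of two countermeasure systems.

    scores_a, scores_b: per-trial detection scores from the two systems.
    w: fusion weight (hypothetical; in practice calibrated on held-out data).
    """
    scores_a = np.asarray(scores_a, dtype=float)
    scores_b = np.asarray(scores_b, dtype=float)
    # Convex combination of the two systems' scores, trial by trial.
    return w * scores_a + (1.0 - w) * scores_b

# Example: two systems scoring the same three trials
fused = fuse_scores([2.1, -0.5, 1.0], [1.9, -1.1, 0.4], w=0.6)
```

The fused score is then thresholded exactly like a single system's score, so fusion adds no complexity at decision time.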

 

View the complete paper here

 

Privacy-Preserving Speaker Recognition with Cohort Score Normalisation 

Digital Security Department, EURECOM (France), Department of Computer Science, TU Darmstadt (Germany) and Omilia Conversational Intelligence

Andreas Nautsch, Jose Patino, Amos Treiber, Themos Stafylakis, Petr Mizera, Massimiliano Todisco, Thomas Schneider and Nicholas Evans

Abstract:

In many voice biometrics applications there is a requirement to preserve privacy, not least because of the recently enforced General Data Protection Regulation (GDPR). Though progress in bringing privacy preservation to voice biometrics is lagging behind developments in other biometrics communities, recent years have seen rapid progress, with secure computation mechanisms such as homomorphic encryption being applied successfully to speaker recognition. Even so, the computational overhead incurred by processing speech data in the encrypted domain is substantial. While still tolerable for single biometric comparisons, most state-of-the-art systems perform some form of cohort-based score normalisation, requiring many thousands of biometric comparisons. The computational overhead is then prohibitive, meaning that one must accept either degraded performance (no score normalisation) or potential for privacy violations. This paper proposes the first computationally feasible approach to privacy-preserving cohort score normalisation. Our solution is a cohort pruning scheme based on secure multi-party computation which enables privacy-preserving score normalisation using probabilistic linear discriminant analysis (PLDA) comparisons. The solution operates upon binary voice representations. While the binarisation is lossy in biometric rank-1 performance, it supports computationally-feasible biometric rank-n comparisons in the encrypted domain.

 

View the complete paper here


Self-supervised speaker embeddings

Brno University of Technology (Czechia) and Omilia Conversational Intelligence

Themos Stafylakis, Johan Rohdin, Oldřich Plchot, Petr Mizera, Lukáš Burget

Abstract:

Contrary to i-vectors, speaker embeddings such as x-vectors are incapable of leveraging unlabelled utterances, due to the classification loss over training speakers. In this paper, we explore an alternative training strategy to enable the use of unlabelled utterances in training. We propose to train speaker embedding extractors via reconstructing the frames of a target speech segment, given the inferred embedding of another speech segment of the same utterance. We do this by attaching to the standard speaker embedding extractor a decoder network, which we feed not merely with the speaker embedding, but also with the estimated phone sequence of the target frame sequence.

The reconstruction loss can be used either as a single objective, or be combined with the standard speaker classification loss. In the latter case, it acts as a regularizer, encouraging generalizability to speakers unseen during training. In all cases, the proposed architectures are trained from scratch and in an end-to-end fashion. We demonstrate the benefits from the proposed approach on the VoxCeleb and Speakers in the Wild Databases, and we report notable improvements over the baseline.
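The combined objective described above pairs a frame-reconstruction term with the standard speaker-classification term. A minimal numpy sketch of that combination (the weighting `alpha`, the mean-squared-error choice, and all function names here are our illustrative assumptions, not the paper's exact losses):

```python
import numpy as np

def combined_loss(recon_frames, target_frames, class_logits, speaker_id, alpha=1.0):
    """Multi-task objective: frame reconstruction + speaker classification.

    recon_frames: decoder output, shape (T, F).
    target_frames: target speech frames, same shape.
    class_logits: speaker-classification logits, shape (n_speakers,).
    alpha: hypothetical weight balancing the two terms.
    """
    # Reconstruction term: how well the decoder predicts the target frames.
    recon = np.mean((recon_frames - target_frames) ** 2)
    # Classification term: cross-entropy over training speakers
    # (numerically stable log-softmax).
    logits = class_logits - class_logits.max()
    log_probs = logits - np.log(np.exp(logits).sum())
    xent = -log_probs[speaker_id]
    return recon + alpha * xent
```

With `alpha = 0` only the reconstruction objective remains, which is the single-objective, label-free setting; with `alpha > 0` the reconstruction term acts as the regularizer the abstract describes.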

 

View the complete paper here


Arrange a demonstration

Our proven Omni-Channel technology is aimed at:

  • Large Corporations (200+ agents / 4+ million calls per year)
  • Integrators & Contact Center Service Providers

If you represent a relevant business and would like to arrange a demonstration of our technology and learn how it can transform your customer care, fill out our form and we will get in touch to get the ball rolling.
