Publication
ICASSP 2021
Conference paper

Generalized Knowledge Distillation from An Ensemble of Specialized Teachers Leveraging Unsupervised Neural Clustering

Abstract

This paper proposes an improved generalized knowledge distillation framework with multiple dissimilar teacher networks, each specialized for a specific acoustic domain, to make a deployable student network more robust to challenging acoustic environments. We first present a method to partition the training data for constructing the teacher ensemble, using unsupervised neural clustering on features based on context-dependent phonemes that represent each acoustic domain. Second, we show how a single student network designed from the partitioned data is effectively trained with multiple specialized teachers. During training, the weights of the student network are updated using a composite two-part cross-entropy loss obtained from a pair of teachers: a specialized teacher corresponding to the input speech and a generalized teacher trained on a balanced data set. Unlike system combination methods, we aim to incorporate the benefits of multiple models into a single student network via knowledge distillation, which adds no computational cost at decoding time. The improvement from the proposed technique is demonstrated on acoustically diverse signals contaminated by challenging real-world noise.
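The composite two-part loss described above can be illustrated with a minimal sketch, not the authors' implementation: the student's soft outputs are matched against the posteriors of both the domain-matched specialized teacher and the generalized teacher, and the two cross-entropy terms are combined with a weighting factor. The function and tensor names, the interpolation weight `alpha`, and the temperature `T` below are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def composite_distillation_loss(student_logits, specialized_logits,
                                generalized_logits, alpha=0.5, T=2.0):
    """Sketch of a two-part distillation loss: cross entropy against a
    specialized teacher (matched to the input speech domain) plus cross
    entropy against a generalized teacher (trained on balanced data)."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_specialized = F.softmax(specialized_logits / T, dim=-1)
    p_generalized = F.softmax(generalized_logits / T, dim=-1)

    # Soft-target cross entropy against each teacher's posteriors.
    ce_specialized = -(p_specialized * log_p_student).sum(dim=-1).mean()
    ce_generalized = -(p_generalized * log_p_student).sum(dim=-1).mean()

    # Composite two-part loss: weighted sum of the two terms.
    return alpha * ce_specialized + (1.0 - alpha) * ce_generalized

# Example usage with dummy logits (batch of 4 frames, 500 output classes).
student = torch.randn(4, 500, requires_grad=True)
spec_teacher = torch.randn(4, 500)
gen_teacher = torch.randn(4, 500)
loss = composite_distillation_loss(student, spec_teacher, gen_teacher)
loss.backward()
```

At decoding time only the single student network is kept, which is why, unlike system combination, the approach adds no inference cost.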
