Publication
INTERSPEECH 2021
Conference paper

Reducing exposure bias in training recurrent neural network transducers

Abstract

When recurrent neural network transducers (RNNTs) are trained using the typical maximum likelihood criterion, the prediction network is trained only on ground truth label sequences. This leads to a mismatch during inference, known as exposure bias, when the model must deal with label sequences containing errors. In this paper we investigate training approaches that reduce exposure bias and thereby improve the generalization of RNNT models for automatic speech recognition (ASR). We introduce a label-preserving input perturbation to the prediction network: the input token sequences are perturbed using SwitchOut and by scheduled sampling based on an additional token language model. Experiments conducted on the 300-hour Switchboard dataset demonstrate the effectiveness of these approaches. By reducing exposure bias, we show that we can further improve the accuracy of a high-performance RNNT ASR model and obtain state-of-the-art results on the 300-hour Switchboard dataset.
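For illustration, the following is a minimal PyTorch-style sketch of a SwitchOut-style perturbation as summarized above: a random number of tokens in the ground-truth label sequence is replaced with tokens drawn uniformly from the vocabulary, while the training targets themselves stay unchanged (hence label-preserving). The function name, temperature parameterization, and sampling details here are illustrative assumptions, not taken from the paper.

    import torch

    def switchout(tokens: torch.Tensor, vocab_size: int, tau: float = 1.0) -> torch.Tensor:
        """Return a perturbed copy of a label sequence for the prediction
        network input; the RNNT loss still uses the original labels.

        tokens: 1-D LongTensor of token ids (ground-truth label sequence).
        tau:    temperature; larger values make more replacements likely.
        """
        n = tokens.size(0)
        # Sample how many tokens to replace: p(k) proportional to exp(-k / tau), k = 0..n.
        logits = -torch.arange(n + 1, dtype=torch.float) / tau
        k = torch.distributions.Categorical(logits=logits).sample().item()
        if k == 0:
            return tokens
        # Pick k positions without replacement and overwrite them with
        # tokens sampled uniformly from the vocabulary.
        positions = torch.randperm(n)[:k]
        perturbed = tokens.clone()
        perturbed[positions] = torch.randint(vocab_size, (k,))
        return perturbed

Scheduled sampling, as described in the abstract, would instead draw the replacement tokens from an additional token language model conditioned on the preceding tokens, rather than uniformly from the vocabulary.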
