29 Sep 2020

In our lab meeting tomorrow, Ashkan will introduce his work on speech translation.

A Zoom link will be posted to Twist on the morning of the meeting.

Effectively pretraining a speech translation decoder with Machine Translation data

Abstract: Directly translating from speech to text with an end-to-end approach is still challenging for many language pairs, due to a lack of sufficient data. Although pretraining the encoder parameters on an Automatic Speech Recognition (ASR) task improves results in low-resource settings, attempts to reuse pretrained parameters from a Neural Machine Translation (NMT) task have been largely unsuccessful in previous work. In this paper, we show that an adversarial regularizer can bring the encoder representations of the ASR and NMT tasks closer together, even though the two tasks operate on different modalities, and that this lets us effectively use a pretrained NMT decoder for speech translation.
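As a rough illustration of the idea only (a toy sketch, not the speaker's actual model or training setup), the snippet below shows the core mechanic of such an adversarial regularizer: a small linear discriminator learns to tell "speech" encoder states from "text" encoder states, while the states themselves receive the reversed gradient, which pushes the two modalities toward a shared representation. All names and shapes here are invented for the example.

```python
import numpy as np

# Toy sketch (NOT the actual model): a linear "discriminator" tries to
# classify which modality an encoder state came from; the states are
# updated with the REVERSED gradient, making the modalities converge.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-ins for encoder outputs: speech states start far from text states.
speech = rng.normal(loc=2.0, scale=0.5, size=(64, 8))
text = rng.normal(loc=-2.0, scale=0.5, size=(64, 8))

gap_before = abs(speech.mean() - text.mean())

w = np.zeros(8)  # discriminator weights (label 1 = speech, 0 = text)
lr = 0.5

for _ in range(200):
    x = np.vstack([speech, text])
    y = np.concatenate([np.ones(64), np.zeros(64)])
    p = sigmoid(x @ w)

    # Discriminator step: ordinary gradient descent on cross-entropy.
    grad_w = x.T @ (p - y) / len(y)
    w -= lr * grad_w

    # Adversarial step: the representations take the *reversed* gradient
    # (gradient ascent on the discriminator's loss), so the two
    # modalities become harder to tell apart.
    grad_x = np.outer(p - y, w) / len(y)
    speech += lr * grad_x[:64]
    text += lr * grad_x[64:]

gap_after = abs(speech.mean() - text.mean())
print(gap_before, gap_after)  # the modality gap shrinks
```

In a real system this sign flip is typically implemented as a gradient-reversal layer inside the training graph, so the ASR and NMT encoders are regularized jointly with the main translation objective rather than in a separate loop like this.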

Tuesday, September 29th, 09:30 a.m.