08 Dec 2019

In our lab meeting next Tuesday, Nadia will practice her thesis defence. Here is the title and abstract:

Translation versus Language Model Pre-training Objectives for Word Sense Disambiguation

Abstract: Contextual word representations pre-trained on large text data have advanced the state of the art in many tasks in Natural Language Processing. Most recent approaches pre-train such models using a language modelling (LM) objective. In this work, we compare and contrast such LM models with the encoder of an encoder-decoder model pre-trained using a machine translation (MT) objective. For certain tasks such as word sense disambiguation, the MT task provides an intuitively better pre-training objective, since different senses of a word tend to translate differently into a target language, while word senses might not always need to be distinguished under an LM objective. Our experimental results on word sense disambiguation provide insight into pre-training objective functions and can be helpful in guiding future work on large-scale pre-trained models for transfer learning in NLP.
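The abstract does not spell out the evaluation procedure, but one common way to probe pre-trained encoders for word sense disambiguation is a nearest-centroid classifier over contextual vectors. The sketch below is purely illustrative and is not taken from the talk: `sense_centroids`, `disambiguate`, and the toy 3-dimensional vectors stand in for whichever pre-trained encoder (LM- or MT-based) is being compared.

```python
import numpy as np

def sense_centroids(examples):
    """examples: list of (contextual_vector, sense_label) pairs from sense-annotated data."""
    by_sense = {}
    for vec, sense in examples:
        by_sense.setdefault(sense, []).append(vec)
    # One centroid per sense: the mean of all contextual vectors with that label.
    return {sense: np.mean(vecs, axis=0) for sense, vecs in by_sense.items()}

def disambiguate(vec, centroids):
    """Assign a new occurrence to the sense whose centroid is closest in cosine similarity."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(centroids, key=lambda s: cos(vec, centroids[s]))

# Toy usage with made-up "contextual vectors" for two senses of "bank".
train = [
    (np.array([0.9, 0.1, 0.0]), "bank_financial"),
    (np.array([0.8, 0.2, 0.1]), "bank_financial"),
    (np.array([0.1, 0.9, 0.2]), "bank_river"),
    (np.array([0.0, 0.8, 0.3]), "bank_river"),
]
centroids = sense_centroids(train)
print(disambiguate(np.array([0.85, 0.15, 0.05]), centroids))  # -> bank_financial
```

The intuition from the abstract fits this setup: if an encoder's pre-training objective forces it to separate senses (as translation tends to, since "bank" maps to different target-language words for the financial and river senses), the per-sense centroids will be farther apart and the nearest-centroid probe will score higher.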

Tuesday, December 10th, 10:30 a.m., TASC1 9408.