09 Mar 2021

In our lab meeting tomorrow, Pooya will give us a review of recent research on Language Model Pretraining.

A review of modern approaches to Language Model Pretraining

Abstract: Language model pretraining followed by downstream fine-tuning has led to significant performance gains on virtually every NLP task and has become a standard step before training downstream models. Since the emergence of BERT, language model pretraining has become a very active research area, attracting significant interest from industry given its impact on end products. Consequently, many different pretraining approaches have been proposed recently, and the field is progressing rapidly. In this presentation, we briefly look at some of the ideas behind these modern pretraining approaches.

Tuesday, Mar 9th, 09:30 a.m.