07 Jul 2020

In our lab meeting tomorrow, Nishant will introduce the paper Synthesizer: Rethinking Self-Attention in Transformer Models. A Zoom link will be posted to Twist on the morning of the meeting.

Synthesizer: Rethinking Self-Attention in Transformer Models

Abstract: The dot product self-attention is known to be central and indispensable to state-of-the-art Transformer models. But is it really required? This paper investigates the true importance and contribution of the dot-product-based self-attention mechanism to the performance of Transformer models. Via extensive experiments, we find that (1) random alignment matrices surprisingly perform quite competitively and (2) learning attention weights from token-token (query-key) interactions is not that important after all. To this end, we propose Synthesizer, a model that learns synthetic attention weights without token-token interactions. Our experimental results show that Synthesizer is competitive against vanilla Transformer models across a range of tasks, including machine translation (EnDe, EnFr), language modeling (LM1B), abstractive summarization (CNN/Dailymail), dialogue generation (PersonaChat) and multi-task language understanding (GLUE, SuperGLUE).
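For anyone who wants a concrete picture before the meeting, below is a minimal sketch (not the authors' implementation) of the two Synthesizer variants the abstract alludes to: the Dense variant, which synthesizes attention logits from each token independently of the others, and the Random variant, which uses a single learned (optionally frozen) alignment matrix shared across all examples. Class names, hyperparameters, and the single-head simplification are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseSynthesizerAttention(nn.Module):
    """Dense variant: attention logits are produced from each token alone,
    with no query-key dot products. Names/shapes are illustrative."""

    def __init__(self, d_model: int, max_len: int):
        super().__init__()
        # Two-layer MLP mapping each token to a row of attention logits.
        self.proj = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, max_len),
        )
        self.value = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        seq_len = x.size(1)
        logits = self.proj(x)[:, :, :seq_len]      # (batch, seq_len, seq_len)
        attn = F.softmax(logits, dim=-1)
        return attn @ self.value(x)                # (batch, seq_len, d_model)


class RandomSynthesizerAttention(nn.Module):
    """Random variant: one seq_len x seq_len logit matrix shared across the
    whole batch; it can be trained or kept fixed at its random init."""

    def __init__(self, d_model: int, max_len: int, trainable: bool = True):
        super().__init__()
        self.logits = nn.Parameter(torch.randn(max_len, max_len),
                                   requires_grad=trainable)
        self.value = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        seq_len = x.size(1)
        attn = F.softmax(self.logits[:seq_len, :seq_len], dim=-1)
        return attn @ self.value(x)


# Quick shape check with illustrative sizes.
x = torch.randn(2, 16, 64)
print(DenseSynthesizerAttention(64, 128)(x).shape)   # torch.Size([2, 16, 64])
print(RandomSynthesizerAttention(64, 128)(x).shape)  # torch.Size([2, 16, 64])
```

The point of the sketch is simply that neither variant ever computes a query-key dot product: the Dense variant predicts its attention row from the token itself, and the Random variant does not look at the input at all when forming the alignment matrix.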

Tuesday, July 7th, 09:30 a.m.