02 Jul 2019

Lindsey will present at this week’s lab meeting.

Understanding RNN States with Predictive Semantic Encodings and Adaptive Representations

Abstract: Recurrent neural networks are an effective and prevalent tool for modeling sequential data such as natural language text. However, their deep structure and massive number of parameters make it challenging to study precisely how they work. This is especially true for researchers who have the expertise to understand the mathematics behind these models at a macroscopic level but lack the tools to expose the microscopic details of what information the models represent internally.

We present a combination of visual techniques that expose some of the inner workings of recurrent neural networks and facilitate their study at a fine level of detail. Specifically, we introduce a consistent visual representation for vector data that adapts to the available visual space. We tackle the problem of assigning meaning to hidden states by learning which outputs they produce and encoding this learned representation in a form that can be interpreted quickly and related to other elements of the visual design. These techniques are combined in a fully interactive visualization tool that we demonstrate improves understanding of common natural language processing tasks.
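For readers curious about the core idea before the talk, the following is a minimal sketch, not the talk’s actual implementation: it illustrates one way to assign meaning to hidden states by fitting a small probe that predicts the outputs each state leads to, then treating that prediction as an interpretable encoding of the state. The toy model, dimensions, and all variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, HIDDEN, STEPS = 50, 64, 200

# Stand-in for a trained sequence model whose states we want to inspect.
rnn = nn.LSTM(input_size=VOCAB, hidden_size=HIDDEN, batch_first=True)
head = nn.Linear(HIDDEN, VOCAB)  # the model's own output layer

# 1. Run the model on some data; collect hidden states and the outputs
#    the model produces from them.
x = torch.randn(8, 32, VOCAB)                # toy input batch
with torch.no_grad():
    states, _ = rnn(x)                       # (batch, time, HIDDEN)
    logits = head(states)                    # model outputs per state
states = states.reshape(-1, HIDDEN)
targets = logits.reshape(-1, VOCAB).argmax(dim=-1)

# 2. Fit a small probe mapping each hidden state to the output it yields.
probe = nn.Linear(HIDDEN, VOCAB)
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(STEPS):
    opt.zero_grad()
    loss = loss_fn(probe(states), targets)
    loss.backward()
    opt.step()

# 3. The probe's prediction for a state is a distribution over outputs,
#    which a visualization can render as a compact semantic encoding.
with torch.no_grad():
    encoding = probe(states[:1]).softmax(dim=-1)
print("top predicted outputs for first state:",
      encoding.topk(3).indices.tolist())
```

In a real visualization tool, the probe’s output distribution would be color-coded or glyph-encoded per state so that a viewer can scan many hidden states at once; the sketch above only shows the probing step.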

Tuesday, July 2nd, 12:00 p.m., TASC1 9408.