In our lab meeting tomorrow, Logan will discuss recent work by Merrill (2019), which relates neural network architectures to classes of automata.
Here are the title and abstract:
Sequential Neural Networks as Automata
Abstract: This work attempts to explain the types of computation that neural networks can perform by relating them to automata. We first define what it means for a real-time network with bounded precision to accept a language. A measure of network memory follows from this definition. We then characterize the classes of languages acceptable by various recurrent networks, attention, and convolutional networks. We find that LSTMs function like counter machines and relate convolutional networks to the subregular hierarchy. Overall, this work attempts to increase our understanding and ability to interpret neural networks through the lens of theory. These theoretical insights help explain neural computation, as well as the relationship between neural networks and natural language grammar.
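For intuition about the counter-machine connection the abstract mentions, here is a small illustrative sketch (not from the paper): a one-counter recognizer for the non-regular language a^n b^n, the kind of counting behavior the abstract says LSTMs can emulate.

```python
def accepts_anbn(s: str) -> bool:
    """Recognize a^n b^n (n >= 1) with a single counter,
    illustrating the one-counter machines the abstract relates LSTMs to."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:           # an 'a' after a 'b' is out of order
                return False
            count += 1           # increment the counter for each 'a'
        elif ch == "b":
            seen_b = True
            count -= 1           # decrement for each 'b'
            if count < 0:        # more b's than a's so far
                return False
        else:
            return False         # alphabet is {a, b}
    return seen_b and count == 0
```

A trained LSTM recognizing this language would realize the counter in a cell-state dimension rather than an explicit integer, but the computational resource is the same.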
Tuesday, March 3rd, 9:30 a.m., TASC1 9408.