News

Anahita's presentation on Neural Phrase-based MT.
12 Jun 2018

In our lab meeting tomorrow, Anahita will present another paper from ICLR 2018 about Neural Phrase-based Machine Translation. Here is the title and abstract of the paper:

Title: Towards Neural Phrase-based Machine Translation

Abstract: In this paper, we present Neural Phrase-based Machine Translation (NPMT). Our method explicitly models the phrase structures in output sequences using Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence modeling method. To mitigate the monotonic alignment requirement of SWAN, we introduce a new layer to perform (soft) local reordering of input sequences. Different from existing neural machine translation (NMT) approaches, NPMT does not use attention-based decoding mechanisms. Instead, it directly outputs phrases in a sequential order and can decode in linear time. Our experiments show that NPMT achieves superior performances on IWSLT 2014 German-English/English-German and IWSLT 2015 English-Vietnamese machine translation tasks compared with strong NMT baselines. We also observe that our method produces meaningful phrases in output languages.
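The reordering idea is easy to picture: before a monotonic segment-by-segment decoder like SWAN consumes the input, each position is replaced by a gated mixture of the embeddings in a small window around it, so nearby tokens can be softly swapped. The PyTorch sketch below is only an illustration under our own assumptions (the window size and the sigmoid gating computed from the concatenated window are ours); it is not the paper's exact layer.

```python
import torch
import torch.nn as nn

class SoftLocalReordering(nn.Module):
    """Illustrative sketch: each output position is a gated mixture of
    input embeddings from a local window, letting the model softly swap
    nearby tokens before a monotonic decoder consumes them. The gating
    scheme here is our assumption, not the paper's exact formulation."""
    def __init__(self, dim, window=3):
        super().__init__()
        self.window = window                      # covers positions t-window .. t+window
        # One gate per window offset, computed from the concatenated window.
        self.gates = nn.Linear((2 * window + 1) * dim, 2 * window + 1)

    def forward(self, x):                         # x: (batch, seq_len, dim)
        b, n, d = x.shape
        pad = torch.zeros(b, self.window, d, device=x.device)
        xp = torch.cat([pad, x, pad], dim=1)      # pad so every window is full
        outputs = []
        for t in range(n):
            win = xp[:, t : t + 2 * self.window + 1, :]        # (b, 2w+1, d)
            g = torch.sigmoid(self.gates(win.reshape(b, -1)))  # (b, 2w+1)
            outputs.append(torch.tanh((g.unsqueeze(-1) * win).sum(dim=1)))
        return torch.stack(outputs, dim=1)        # (b, seq_len, dim)
```

For example, `SoftLocalReordering(dim=256)(torch.randn(8, 20, 256))` returns a reordered sequence of the same shape, which a monotonic segmentation model could then consume.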

The paper can be found here: https://openreview.net/forum?id=HktJec1RZ

Wednesday, June 13th, 10-11 AM, Location: TASC1 9408.

Anoop will talk about computational decipherment
05 Jun 2018

This week in our lab meeting, Anoop will talk about computational decipherment. Here’s the title and abstract of his talk:

Title: Computational Decipherment of Ancient Scripts

Abstract: A brief overview of methods in the computational decipherment of ancient scripts. We will compare and contrast with more recent unsupervised neural machine translation models.

Wednesday, June 6th, 10-11 AM, Location: TASC1 9408.

Lindsey's talk on abstractive sentence summarization.
30 May 2018

In our lab meeting today, Lindsey presented a paper from a team at Facebook. Here is the abstract of his talk:

Abstract: Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
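The core mechanism is compact enough to sketch: to emit each summary word, the model attends over the input sentence, forms a context vector from the attention weights, and feeds it to an output layer. The PyTorch sketch below is a deliberately simplified reading of that idea; the bilinear scorer and the mean-pooled decoder state are our assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class AttentionSummarizer(nn.Module):
    """Minimal sketch of an attention-based abstractive summarizer:
    each summary word is predicted from a context vector computed by
    attending over the input sentence. Scoring and pooling choices
    here are our own simplifications."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.score = nn.Bilinear(dim, dim, 1)      # score(input position, decoder state)
        self.out = nn.Linear(2 * dim, vocab_size)  # predict the next summary word

    def forward(self, src_ids, prev_summary_ids):
        src = self.embed(src_ids)                          # (b, n, d)
        state = self.embed(prev_summary_ids).mean(dim=1)   # crude decoder state (b, d)
        # Attention weights over input positions, conditioned on the state.
        scores = self.score(src, state.unsqueeze(1).expand_as(src)).squeeze(-1)
        attn = torch.softmax(scores, dim=1)                # (b, n)
        context = (attn.unsqueeze(-1) * src).sum(dim=1)    # (b, d)
        return self.out(torch.cat([context, state], dim=-1))  # logits over the vocab
```

Decoding then proceeds word by word, feeding each predicted word back in as part of `prev_summary_ids`.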

The paper can be found here: https://arxiv.org/pdf/1509.00685.pdf

Wednesday, May 30th, 10-11 AM, Location: TASC1 9408.

Jetic's presentation on Neural Process Networks.
23 May 2018

For our lab meeting this week, Jetic will talk about another exciting paper from ICLR 2018. Here is the title and abstract of the paper:

Title: Simulating Action Dynamics with Neural Process Networks

Abstract: Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers. The model updates the states of the entities by executing learned action operators. Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives.
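The abstract's central idea, actions as state transformers applied to tracked entities, can be sketched in a few lines. In the toy PyTorch version below, each action is a learned matrix that transforms the state vectors of the entities it touches; the soft entity mask and tanh update are our simplifications, not the paper's full selection and update machinery.

```python
import torch
import torch.nn as nn

class EntityStateTracker(nn.Module):
    """Toy sketch of the core idea: actions are modeled as learned
    operators that transform the states of the entities they act on.
    Action/entity selection is assumed given here; the paper learns it."""
    def __init__(self, num_actions, num_entities, dim=64):
        super().__init__()
        self.action_ops = nn.Embedding(num_actions, dim * dim)   # one matrix per action
        self.entity_states = nn.Parameter(torch.randn(num_entities, dim))  # initial states
        self.dim = dim

    def apply_action(self, states, action_id, entity_mask):
        # states: (num_entities, dim); entity_mask: (num_entities,) in [0, 1]
        W = self.action_ops(action_id).view(self.dim, self.dim)
        transformed = torch.tanh(states @ W)       # the action as a state transformer
        # Only the entities the action touches are (softly) updated.
        m = entity_mask.unsqueeze(-1)
        return m * transformed + (1 - m) * states
```

Reading a sentence like "chop the onions" would then mean selecting the "chop" operator and a mask over the onion entity, and applying one such update step.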

Wednesday, May 23rd, 10-11 AM, Location: TASC1 9408.

Nishant will talk about Decipherment.
16 May 2018

For our lab meeting this week, Nishant will talk about improving decipherment with RNNs. Here’s the title and abstract of his talk:

Title: Can Recurrent Neural Networks help improve Decipherment?

Abstract: Decipherment in NLP has always been an interesting topic of research. There have been quite a few successful approaches inspired by statistical machine translation (SMT) methodologies to address the decipherment problem in the past. With the rise of Neural Machine Translation (NMT) to prominence and the encouraging results that neural language model powered systems have produced, approaching the decipherment problem with deep learning is a logical next step. In this talk, Nishant will discuss the progress of his research on Neural Decipherment, a few solved ciphers, and a comparison of his results with notable previous work.
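The abstract leaves the architecture unspecified, but one common way a neural language model enters a decipherment pipeline is as a scorer: candidate keys are applied to the ciphertext and the resulting plaintexts are ranked by LM likelihood. The PyTorch sketch below illustrates that pattern for a simple substitution cipher; it is our own minimal example, not Nishant's method.

```python
import torch
import torch.nn as nn

class CharLM(nn.Module):
    """Character-level RNN language model; plaintext candidates produced
    by a decipherment hypothesis can be ranked by the likelihood it assigns."""
    def __init__(self, alphabet_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(alphabet_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, alphabet_size)

    def log_likelihood(self, ids):                 # ids: (1, length) of character ids
        h, _ = self.rnn(self.embed(ids[:, :-1]))   # predict each next character
        logp = torch.log_softmax(self.out(h), dim=-1)
        return logp.gather(-1, ids[:, 1:].unsqueeze(-1)).sum()

def score_key(lm, cipher_ids, key):
    """Apply a substitution key (cipher symbol id -> plaintext id) and score
    the resulting plaintext under the LM; a higher score suggests a more
    plausible key. A search procedure would call this for many candidates."""
    plain = torch.tensor([[key[c] for c in cipher_ids]])
    return lm.log_likelihood(plain)
```

A full decipherment system would wrap this scoring inside a search over keys (beam search, hill climbing, or EM-style updates in the SMT-inspired work the abstract mentions).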

Wednesday, May 16th, 10-11 AM, Location: TASC1 9408.
