News

Nishant will defend his Master's thesis
25 Jul 2018

On July 26th at 2:30pm in TASC1 9204 W, Nishant will defend his Master's thesis, “Decipherment of Substitution Ciphers Using Neural Language Models”.

Abstract:

The decipherment of homophonic substitution ciphers using language models (LMs) is a well-studied task in Natural Language Processing (NLP). Previous work on this topic scores short local spans of possible plaintext decipherments using n-gram LMs. The most widely used technique is a beam search with n-gram LMs, proposed by Nuhn et al. (2013). We propose a new approach to decipherment using a beam search algorithm that scores the entire candidate plaintext at each step with a neural LM. We augment beam search with a novel rest cost estimation that exploits the predictive power of a neural LM. This work is, to our knowledge, the first to use a large pretrained neural language model for decipherment. Our neural decipherment approach outperforms the state-of-the-art n-gram-based methods on many different ciphers. On challenging ciphers such as the Beale cipher, our system reports significantly lower error rates with much smaller beam sizes.

M.Sc. Examining Committee:

Dr. Anoop Sarkar, Senior Supervisor
Dr. Fred Popowich, Supervisor
Dr. David Campbell, Examiner
Dr. Keval Vora, Chair
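
The search described in the abstract can be pictured with a small sketch. Below is a minimal, hypothetical Python version for a simple 1:1 substitution cipher (the thesis also covers homophonic ciphers, which relax the one-to-one constraint). A smoothed unigram letter model stands in for the neural LM, and the rest cost is an optimistic bound on the score of the not-yet-deciphered positions; the names and the stand-in scorer are illustrative, not the thesis code.

```python
import math
from collections import Counter

LETTERS = "abcdefghijklmnopqrstuvwxyz"

# Add-one-smoothed unigram letter model estimated from a tiny sample.
# It is only a placeholder for the neural LM that the thesis uses to
# score the entire candidate plaintext at each step.
SAMPLE = "the quick brown fox jumps over the lazy dog"
counts = Counter(c for c in SAMPLE if c.isalpha())
total = sum(counts.values())
LOGP = {c: math.log((counts[c] + 1) / (total + 26)) for c in LETTERS}
BEST_LOGP = max(LOGP.values())  # optimistic per-position "rest cost"

def decipher(ciphertext, beam_size=100):
    # Fix cipher symbols in order of decreasing frequency, as in the
    # beam search of Nuhn et al. (2013).
    symbols = [s for s, _ in Counter(ciphertext).most_common()]
    beam = [{}]  # each hypothesis is a partial symbol -> letter mapping
    for sym in symbols:
        candidates = []
        for mapping in beam:
            used = set(mapping.values())
            for letter in LETTERS:
                if letter in used:  # keep the mapping one-to-one
                    continue
                new_map = dict(mapping)
                new_map[sym] = letter
                # Score every position deciphered so far and add an
                # optimistic rest cost for the remaining positions.
                score = sum(LOGP[new_map[c]] if c in new_map else BEST_LOGP
                            for c in ciphertext)
                candidates.append((score, new_map))
        candidates.sort(key=lambda x: x[0], reverse=True)
        beam = [m for _, m in candidates[:beam_size]]
    best = beam[0]
    return "".join(best[c] for c in ciphertext)

print(decipher("khoor"))  # toy ciphertext: "hello" under a Caesar shift
```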

Nishant and Fariha will have practice talks this week
24 Jul 2018

In our lab meeting this week, Nishant and Fariha will each give a 30-minute practice talk. Nishant will defend his Master's thesis on Thursday this week, and Fariha will have her depth exam on Wednesday next week. Here are the titles and abstracts of their presentations:

Nishant: Decipherment of Substitution Ciphers with Neural Language Models

Abstract: The decipherment of homophonic substitution ciphers using language models (LMs) is a well-studied task in Natural Language Processing (NLP). Previous work on this topic scores short local spans of possible plaintext decipherments using n-gram LMs. The most widely used technique is a beam search with n-gram LMs, proposed by Nuhn et al. (2013). We propose a new approach to decipherment using a beam search algorithm that scores the entire candidate plaintext at each step with a neural LM. We augment beam search with a novel rest cost estimation that exploits the predictive power of a neural LM. This work is, to our knowledge, the first to use a large pretrained neural language model for decipherment. Our neural decipherment approach outperforms the state-of-the-art n-gram-based methods on many different ciphers. On challenging ciphers such as the Beale cipher, our system reports significantly lower error rates with much smaller beam sizes.

Fariha: Generating Textual Description from Time Series Data

Abstract: Natural language generation (NLG), a subfield of natural language processing (NLP), deals with non-linguistic and linguistic representations to construct written text in natural language. The generated text can be presented in the form of reports, summaries, explanations, messages, etc. Various approaches have been proposed for analyzing numerical or time series data to produce a written text description. In this presentation, we will explore different approaches from the areas of NLG and data-to-text technology for automatically generating natural language descriptions of time series that reflect subject-matter expertise. We will shed light on popular approaches, including knowledge-based and machine learning techniques, applied to identify relevant content in time series datasets and generate textual descriptions.

Wednesday, July 25th, 10:00 a.m. TASC1 9408.

Nadia will talk about a new method for using deep neural architectures in word representations
03 Jul 2018

In our lab meeting this week, Nadia will talk about making use of deep neural networks for encoding semantic information in word representations. Here are the title and abstract of her talk:

Title: Deep contextualized word representations

Abstract: We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bi-directional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment, and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.

Link to paper: https://arxiv.org/abs/1802.05365
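
Since the abstract is compact, here is a small, hypothetical sketch of the paper's central operation (its Equation 1): the task-specific ELMo vector for a token is a scaled, softmax-weighted sum of the biLM's layer activations. The random activations below are placeholders; in practice they come from the pretrained biLM, and the weights and scale are learned jointly with the downstream task.

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, seq_len, dim = 3, 5, 8               # (L+1) biLM layers, tokens, hidden size
h = rng.normal(size=(num_layers, seq_len, dim))  # placeholder for h_{k,j}^{LM}

s_raw = np.zeros(num_layers)                 # task-specific layer weights (learned)
gamma = 1.0                                  # task-specific scale (learned)
s = np.exp(s_raw) / np.exp(s_raw).sum()      # softmax-normalized weights s_j

# ELMo_k = gamma * sum_j s_j * h_{k,j}: one mixed vector per token.
elmo = gamma * np.einsum("j,jkd->kd", s, h)
print(elmo.shape)                            # (5, 8)
```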

Wednesday, July 3rd, 10:00 a.m. TASC1 9408.

Mahmoud from the Computational Logic Lab will talk about dynamic gated graph neural networks
26 Jun 2018

This week in our lab meeting, Mahmoud from the Computational Logic Lab will give a presentation on dynamic gated graph neural networks. Here are the title and abstract of his talk:

Title: Scene graph generation with dynamic gated graph neural networks

Abstract: In spite of recent advances in Visual Question Answering (VQA), current VQA models often fail on sufficiently new samples, converge on an answer after listening to only a few words of the question, and do not alter their answers across different images. Most of these models try to build a loose association between the given training QA pairs and images in an end-to-end framework. But to succeed at the VQA task, a model must be able to recognize the objects and their visual relationships in an image, identify the attributes of these objects, and reason about the role of each object in the scene context. To address these issues, we propose a new deep model, called Dynamic Gated Graph Neural Networks (D-GGNN), for extracting a scene graph for an image, given a set of bounding box proposals. A scene graph is a visually-grounded digraph for an image, where the nodes represent the objects and the edges show the relationships between them. Unlike the recently proposed Gated Graph Neural Networks (GGNN), the D-GGNN can be applied to an input image when only partial relationship information, or none at all, is known. In each training episode, the D-GGNN sequentially builds a candidate scene graph for a given training input and labels additional nodes and edges of the graph. The scene graph is constructed using a deep reinforcement learning framework, where the actions are choosing labels for edges and nodes, and the rewards are defined by the match between the ground-truth annotations in the data and the labels assigned at a point in the search. The predicted scene graph is then used to answer questions about the image using an attention mechanism, where we compute an attention weight for each object of the scene graph based on the given question. Our preliminary experiments show promising results on both VQA and scene graph generation tasks.
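
To make the scene-graph object concrete, here is a small, hypothetical Python sketch of the structure the abstract describes: a digraph whose nodes are labeled objects (from bounding-box proposals) and whose edges are labeled relationships. The D-GGNN chooses these labels sequentially with a learned reinforcement-learning policy; the fixed action list below is purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)  # node id -> object label
    edges: dict = field(default_factory=dict)  # (src, dst) -> relation label

    def add_node(self, node_id, label):
        self.nodes[node_id] = label

    def add_edge(self, src, dst, relation):
        self.edges[(src, dst)] = relation

g = SceneGraph()
# One action per step: label a proposal box as a node, or label an edge.
# In the D-GGNN these actions come from the learned policy.
for action in [("node", 0, "man"), ("node", 1, "horse"),
               ("edge", 0, 1, "riding")]:
    if action[0] == "node":
        g.add_node(action[1], action[2])
    else:
        g.add_edge(action[1], action[2], action[3])

print(g.nodes)  # {0: 'man', 1: 'horse'}
print(g.edges)  # {(0, 1): 'riding'}
```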

Wednesday, June 26th, 10-11 AM, Location: TASC1 9408.

Anahita's presentation on Neural Phrase-based MT
12 Jun 2018

In our lab meeting tomorrow, Anahita will present another paper from ICLR 2018, on Neural Phrase-based Machine Translation. Here are the title and abstract of the paper:

Title: Towards Neural Phrase-based Machine Translation

Abstract: In this paper, we present Neural Phrase-based Machine Translation (NPMT). Our method explicitly models the phrase structures in output sequences using Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence modeling method. To mitigate the monotonic alignment requirement of SWAN, we introduce a new layer to perform (soft) local reordering of input sequences. Different from existing neural machine translation (NMT) approaches, NPMT does not use attention-based decoding mechanisms. Instead, it directly outputs phrases in a sequential order and can decode in linear time. Our experiments show that NPMT achieves superior performances on IWSLT 2014 German-English/English-German and IWSLT 2015 English-Vietnamese machine translation tasks compared with strong NMT baselines. We also observe that our method produces meaningful phrases in output languages.

The paper can be found here: https://openreview.net/forum?id=HktJec1RZ
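
As a rough intuition for the abstract's claim of attention-free, linear-time decoding, here is a hypothetical Python sketch of monotonic phrase-by-phrase output: the decoder walks the (already soft-reordered) input left to right and emits one target phrase per input segment. The toy phrase lookup merely stands in for SWAN's learned segmentation and phrase generation.

```python
# Hypothetical segment -> phrase table; in NPMT, SWAN learns this mapping.
TOY_PHRASES = {
    ("guten", "morgen"): ["good", "morning"],
    ("alle", "zusammen"): ["everyone"],
}

def decode(tokens, max_seg=2):
    out, i = [], 0
    while i < len(tokens):
        # Greedily take the longest known segment starting at position i;
        # each position is consumed exactly once, so decoding is linear.
        for k in range(min(max_seg, len(tokens) - i), 0, -1):
            seg = tuple(tokens[i:i + k])
            if seg in TOY_PHRASES or k == 1:
                out.extend(TOY_PHRASES.get(seg, list(seg)))
                i += k
                break
    return out

print(decode(["guten", "morgen", "alle", "zusammen"]))
# -> ['good', 'morning', 'everyone']
```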

Wednesday, June 13th, 10-11 AM, Location: TASC1 9408.
