News

Hassan's overview of recent Syntax-aware Neural Machine Translation systems
14 Feb 2018

In our lab meeting tomorrow, Hassan will give us an overview of syntax-aware NMT systems. Here’s the abstract of his talk:

The idea of taking advantage of syntax in Machine Translation to produce better translations was suggested many years ago (Williams et al., 2016) and was used by many state-of-the-art Statistical Machine Translation models of the time, e.g. Hiero (Chiang, 2005), SAMT (Zollmann and Venugopal, 2006), and models based on “GHKM rules” (Galley et al., 2006). Neural Machine Translation systems, however, were initially introduced without explicitly taking syntax into account while performing the translation. Recently, Syntax-aware Neural Machine Translation systems have been revisiting this idea, taking advantage of the syntax of the sentences being translated while following the best practices of NMT. This talk gives a brief overview of recent work on Syntax-aware Neural Machine Translation systems.

Wednesday, Feb 14, 10-11 AM, Location: TASC1 9408.

Our weekly lab meetings for this semester are scheduled for Wednesdays 10-11
06 Feb 2018

This semester we have scheduled our lab meetings for Wednesdays, 10-11 AM, in TASC1 9408. This week everyone will give a brief presentation about their ongoing research.

Anahita's talk about utilizing neural networks in Hidden Markov Alignment Models
22 Nov 2017

In our next lab meeting, Anahita will talk about various ways of applying neural networks in Hidden Markov Models. Here’s the abstract of her talk:

We present how the Hidden Markov alignment model can be neuralized. In particular, we provide neural network-based emission and transition models. The standard forward-backward algorithm still applies to compute the posterior probabilities. We then backpropagate the posteriors through the networks to maximize the likelihood of the data.
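
For concreteness, here is a minimal sketch in PyTorch of how an HMM alignment model can be neuralized, in the spirit of the abstract above. The class name `NeuralHMMAligner` and the specific parameterizations are illustrative assumptions, not Anahita's actual model; note that differentiating the forward log-likelihood with autograd implicitly backpropagates through the same posteriors that forward-backward computes.

```python
# Illustrative sketch of a neuralized HMM alignment model (assumed names and
# parameterizations; not the model from the talk). Emissions and transitions
# come from small neural networks; the forward algorithm computes the
# log-likelihood, and autograd backpropagates through it.
import math
import torch
import torch.nn as nn

class NeuralHMMAligner(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.proj = nn.Linear(dim, dim)
        # Transition model scores a jump between source positions by its width.
        self.jump = nn.Sequential(nn.Linear(1, dim), nn.Tanh(), nn.Linear(dim, 1))

    def log_likelihood(self, src, tgt):
        # src: (I,) source word ids; tgt: (J,) target word ids.
        I, J = src.size(0), tgt.size(0)
        # Neural emission model: log p(t_j | s_i), normalized over tgt vocab.
        logits = self.proj(self.src_emb(src)) @ self.tgt_emb.weight.t()  # (I, V)
        emit = logits.log_softmax(-1)[:, tgt]                            # (I, J)
        # Neural transition model: log p(i | i'), a distribution over jumps.
        width = (torch.arange(I)[:, None] - torch.arange(I)[None, :]).float()
        trans = self.jump(width.unsqueeze(-1)).squeeze(-1).log_softmax(0)  # (I, I)
        # Forward algorithm in log space (uniform initial distribution).
        alpha = emit[:, 0] - math.log(I)
        for j in range(1, J):
            alpha = torch.logsumexp(alpha[None, :] + trans, dim=1) + emit[:, j]
        return torch.logsumexp(alpha, dim=0)

# Training step: maximize the likelihood of a parallel sentence pair.
model = NeuralHMMAligner(src_vocab=1000, tgt_vocab=1000)
src, tgt = torch.tensor([4, 8, 15]), torch.tensor([16, 23, 42, 7])
loss = -model.log_likelihood(src, tgt)
loss.backward()  # gradients flow through the forward recursion
```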

Thursday, Nov. 23, 10-11 AM, Location: TASC1 9408.

Wasifa's talk about Employing a Neural Hierarchical Model for Abstractive Text Summarization
08 Nov 2017

In our next lab meeting, Wasifa will talk about employing a neural hierarchical model for abstractive summarization. Here’s the abstract of her talk:

As the growth of online data in the form of news, social media, email, and text continues, automatic summarization is integral to generating a condensed form that captures the gist of the original text. While most of the earlier works on automatic summarization use an extractive approach to identify the most important parts of the document, some recent research focuses on the more challenging task of making the summaries more abstractive, requiring effective paraphrasing and generalization steps. In this work, we propose an encoder-decoder attentional recurrent neural network model to achieve automatic abstractive summarization. Although most of the recently proposed methods have already used neural sequence-to-sequence models, two issues still need to be addressed: how to focus on the most important portions of the input when generating the output words, and how to handle out-of-vocabulary words not contained in the fixed-size target list. Unlike other NLP tasks such as machine translation, which requires encoding all input information to produce the translation, summarization needs to extract only the key information while ignoring the irrelevant portions that might degrade overall summary quality. We use a hierarchical word-to-sentence encoder to jointly learn word and sentence importance using features such as content richness, salience, and position. During decoding, the attention mechanism operates at both the sentence and word levels. To address the problem of unknown words, we learn a word-to-character model.
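
As a rough illustration of attention operating at both levels during decoding, the sketch below (in PyTorch) rescales word-level attention within each sentence by sentence-level attention. The module name, scoring functions, and combination scheme are assumptions for illustration, not Wasifa's actual model.

```python
# Illustrative sketch of sentence- and word-level attention for an
# encoder-decoder summarizer (assumed names and scoring functions; not the
# model from the talk). S = sentences, W = words per sentence, H = hidden size.
import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    def __init__(self, H):
        super().__init__()
        self.sent_score = nn.Linear(2 * H, 1)  # scores a sentence vs. decoder state
        self.word_score = nn.Linear(2 * H, 1)  # scores a word vs. decoder state

    def forward(self, word_h, sent_h, dec_h):
        # word_h: (S, W, H) word encoder states; sent_h: (S, H) sentence
        # encoder states; dec_h: (H,) current decoder state.
        S, W, H = word_h.shape
        # Sentence-level attention: which sentences matter for the next word?
        a_sent = self.sent_score(
            torch.cat([sent_h, dec_h.expand(S, H)], dim=-1)).squeeze(-1).softmax(0)      # (S,)
        # Word-level attention within each sentence.
        a_word = self.word_score(
            torch.cat([word_h, dec_h.expand(S, W, H)], dim=-1)).squeeze(-1).softmax(-1)  # (S, W)
        # Combine: each word's weight is rescaled by its sentence's weight,
        # then renormalized over all words in the document.
        a = a_sent[:, None] * a_word
        a = a / a.sum()
        context = (a.unsqueeze(-1) * word_h).sum(dim=(0, 1))  # (H,) context vector
        return context, a

# One decoding step with random states, just to show the shapes.
attn = HierarchicalAttention(H=32)
ctx, weights = attn(torch.randn(3, 10, 32), torch.randn(3, 32), torch.randn(32))
```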

Thursday, Nov. 9, 10-11 AM, Location: TASC1 9408.

Anoop's talk about Machine Reading of Natural Language and Interactive Visualization
25 Oct 2017

In our lab meeting tomorrow, Anoop will talk about a new perspective on summarizing large amounts of text and how we can visualize the result. Here’s the abstract of his talk:

In natural language processing, the summarization of information in a large amount of text has typically been viewed as a type of natural language generation problem, e.g. “produce a 250 word summary of some documents based on some input query”. An alternative view, which will be the focus of this talk, is to use natural language parsing to extract facts from a collection of documents and then use information visualization to provide an interactive summarization of these facts.

The first step is to extract detailed facts about events from natural language text using a predicate-centered view of events (who did what to whom, when and how). We exploit semantic roles to create a predicate-centric ontology for entities, which is used to build a knowledge base of facts about entities and their relationships with other entities.
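
To make the predicate-centered view concrete, here is a schematic sketch of what one extracted fact might look like. The `Fact` record, the PropBank-style role labels, and the `srl_to_fact` helper are illustrative assumptions, not the actual schema used in this work.

```python
# Schematic sketch of a predicate-centered fact ("who did what to whom, when
# and how") built from semantic roles. The Fact record, the PropBank-style
# role labels, and the helper below are illustrative, not the actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fact:
    predicate: str                  # "what": the event predicate
    agent: Optional[str] = None     # "who"  (ARG0)
    patient: Optional[str] = None   # "whom" (ARG1)
    time: Optional[str] = None      # "when" (ARGM-TMP)
    manner: Optional[str] = None    # "how"  (ARGM-MNR)

ROLE_MAP = {"ARG0": "agent", "ARG1": "patient",
            "ARGM-TMP": "time", "ARGM-MNR": "manner"}

def srl_to_fact(predicate, roles):
    """Turn one semantic-role-labeled predicate (e.g. the output of an
    off-the-shelf SRL system) into a predicate-centered Fact record."""
    fact = Fact(predicate=predicate)
    for label, span in roles.items():
        if label in ROLE_MAP:
            setattr(fact, ROLE_MAP[label], span)
    return fact

# "Napoleon invaded Russia in 1812" ->
# Fact(predicate='invaded', agent='Napoleon', patient='Russia',
#      time='in 1812', manner=None)
fact = srl_to_fact("invaded",
                   {"ARG0": "Napoleon", "ARG1": "Russia", "ARGM-TMP": "in 1812"})
```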

The next step is to use information visualization to provide a summarization of the facts in this automatically extracted knowledge base. The user can interact with the visualization to find summaries at different granularities, which makes it easy to discover even extremely uncommon facts.

We have used this methodology to build an interactive visualization of events in human history by machine reading Wikipedia articles. I will demo the visualization and describe the results of a user study that evaluates this interactive visualization for a summarization task.

Thursday, Oct 26, 10-11 AM, Location: TASC1 9408.

Recent Publications