News

Anahita's talk about Utilizing Neural Networks in Hidden Markov Alignment Models
22 Nov 2017

In our next lab meeting, Anahita will talk about various ways of applying neural networks in Hidden Markov Models. Here is the abstract of her talk:

We present how the Hidden Markov alignment model can be neuralized. In particular, we provide neural network-based emission and transition models. The standard forward-backward algorithm still applies to compute the posterior probabilities. We then backpropagate the posteriors through the networks to maximize the likelihood of the data.
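
To make the idea concrete, here is a minimal sketch (my own illustration, not code from the talk) of an HMM alignment model whose emission and transition distributions are produced by small neural networks. The forward algorithm computes the sentence log-likelihood, and backpropagating through that recursion pushes the posterior counts back through the networks. The dot-product emission model and the distance-bucketed transition model are placeholder parameterizations, not the ones used in the talk.

```python
# Hedged sketch of a neuralized HMM alignment model (illustrative only).
import torch
import torch.nn as nn

class NeuralHMMAligner(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=64, max_jump=10):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.max_jump = max_jump
        # transition model: one learned score per (clipped) jump distance
        self.jump = nn.Embedding(2 * max_jump + 1, 1)

    def emission_log_probs(self, src_ids, tgt_ids):
        # p(target word | aligned source word), softmax over the target vocabulary
        scores = self.src_emb(src_ids) @ self.tgt_emb.weight.t()        # I x |V_tgt|
        return torch.log_softmax(scores, dim=-1)[:, tgt_ids]            # I x J

    def transition_log_probs(self, I):
        # p(next source position | current source position), by bucketed jump distance
        jumps = torch.arange(I).view(1, -1) - torch.arange(I).view(-1, 1)
        jumps = jumps.clamp(-self.max_jump, self.max_jump) + self.max_jump
        return torch.log_softmax(self.jump(jumps).squeeze(-1), dim=-1)  # I x I

    def log_likelihood(self, src_ids, tgt_ids):
        I, J = src_ids.size(0), tgt_ids.size(0)
        emit = self.emission_log_probs(src_ids, tgt_ids)                # I x J
        trans = self.transition_log_probs(I)                            # I x I
        alpha = emit[:, 0] - torch.log(torch.tensor(float(I)))          # uniform initial state
        for j in range(1, J):                                           # forward algorithm
            alpha = torch.logsumexp(alpha.unsqueeze(1) + trans, dim=0) + emit[:, j]
        return torch.logsumexp(alpha, dim=0)

# Toy usage: minimize the negative log-likelihood of a sentence pair.
src = torch.tensor([3, 17, 42])   # assumed source word ids
tgt = torch.tensor([5, 9])        # assumed target word ids
model = NeuralHMMAligner(src_vocab=1000, tgt_vocab=1000)
loss = -model.log_likelihood(src, tgt)
loss.backward()                   # gradients flow through the forward recursion
```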

Thursday, Nov. 23, 10-11 AM, Location: TASC1 9408.

Wasifa's talk about Employing a Neural Hierarchical Model for Abstractive Text Summarization
08 Nov 2017

In our next lab meeting, Wasifa will talk about employing a neural hierarchical model for abstractive summarization. Here is the abstract of her talk:

As the growth of online data in the form of news, social media, email, and text continues, automatic summarization is integral to generating a condensed form that conveys the gist of the original text. While most earlier work on automatic summarization uses an extractive approach to identify the most important parts of the document, some recent research focuses on the more challenging task of making summaries more abstractive, requiring effective paraphrasing and generalization steps. In this work, we propose an encoder-decoder attentional recurrent neural network model for automatic abstractive summarization. Although most recently proposed methods already use neural sequence-to-sequence models, two issues still need to be addressed: how to focus on the most important portions of the input when generating the output words, and how to handle out-of-vocabulary words not contained in the fixed-size target list. Unlike other NLP tasks such as machine translation, which require encoding all input information to produce the translation, summarization needs to extract only the key information while ignoring the irrelevant portions that might degrade overall summary quality. We use a hierarchical word-to-sentence encoder to jointly learn word and sentence importance using features such as content richness, salience, and position. During decoding, the attention mechanism operates at both the sentence and word levels. To address the problem of unknown words, we learn a word-to-character model.
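
For readers unfamiliar with the architecture, the sketch below illustrates the general shape of a hierarchical word-to-sentence encoder with two-level attention, where the weight on a word is rescaled by the weight on its sentence. It is a hedged illustration under my own assumptions (GRU encoders, mean pooling, dot-product attention), not the model from the talk.

```python
# Hedged sketch of a hierarchical word-to-sentence encoder with two-level attention.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.word_rnn = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.sent_rnn = nn.GRU(2 * dim, dim, batch_first=True, bidirectional=True)

    def forward(self, doc):                        # doc: list of 1 x T_i word-id tensors
        sent_vecs, word_states = [], []
        for sent in doc:
            h, _ = self.word_rnn(self.emb(sent))   # 1 x T_i x 2*dim
            word_states.append(h.squeeze(0))
            sent_vecs.append(h.mean(dim=1))        # simple pooling for the sentence vector
        sent_states, _ = self.sent_rnn(torch.stack(sent_vecs, dim=1))  # 1 x S x 2*dim
        return word_states, sent_states.squeeze(0)

def hierarchical_attention(word_states, sent_states, query):
    # Sentence-level attention, then word-level attention within each sentence;
    # the final weight on a word is (sentence weight) * (word weight).
    sent_scores = torch.softmax(sent_states @ query, dim=0)            # S
    weights, values = [], []
    for i, words in enumerate(word_states):
        word_scores = torch.softmax(words @ query, dim=0)              # T_i
        weights.append(sent_scores[i] * word_scores)
        values.append(words)
    weights = torch.cat(weights)
    weights = weights / weights.sum()                                  # renormalize
    return (torch.cat(values, dim=0) * weights.unsqueeze(-1)).sum(dim=0)

# Toy usage: two short "sentences" and a zero vector standing in for a decoder state.
doc = [torch.tensor([[1, 4, 7]]), torch.tensor([[2, 9, 3, 6]])]
enc = HierarchicalEncoder(vocab_size=50)
word_states, sent_states = enc(doc)
context = hierarchical_attention(word_states, sent_states, torch.zeros(256))
```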

Thursday, Nov. 9, 10-11 AM, Location: TASC1 9408.

Anoop's talk about Machine Reading of Natural Language and Interactive Visualization
25 Oct 2017

In our lab meeting tomorrow, Anoop will talk about a new perspective on summarizing large amounts of text and how we can visualize it. Here is the abstract of his talk:

In natural language processing, the summarization of information in a large amount of text has typically been viewed as a type of natural language generation problem, e.g. “produce a 250 word summary of some documents based on some input query”. An alternative view, which will be the focus of this talk, is to use natural language parsing to extract facts from a collection of documents and then use information visualization to provide an interactive summarization of these facts.

The first step is to extract detailed facts about events from natural language text using a predicate-centered view of events (who did what to whom, when and how). We exploit semantic roles in order to create a predicate-centric ontology for entities which is used to create a knowledge base of facts about entities and their relationship with other entities.
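
As a purely illustrative picture of such predicate-centered facts, the sketch below maps PropBank-style semantic-role frames onto who/what/whom/when/how tuples. The input frame format is an assumption of mine, not the actual extraction pipeline used in this work.

```python
# Hedged sketch: predicate-centered fact tuples filled from semantic-role frames.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Fact:
    predicate: str                  # what happened
    agent: Optional[str] = None     # who (ARG0)
    patient: Optional[str] = None   # to whom / what (ARG1)
    time: Optional[str] = None      # when (ARGM-TMP)
    manner: Optional[str] = None    # how (ARGM-MNR)

ROLE_MAP = {"ARG0": "agent", "ARG1": "patient",
            "ARGM-TMP": "time", "ARGM-MNR": "manner"}

def frames_to_facts(srl_frames: List[dict]) -> List[Fact]:
    # Each frame is assumed to look like:
    # {"predicate": "invaded", "roles": {"ARG0": "Rome", "ARG1": "Gaul", "ARGM-TMP": "58 BC"}}
    facts = []
    for frame in srl_frames:
        fact = Fact(predicate=frame["predicate"])
        for label, span in frame.get("roles", {}).items():
            field = ROLE_MAP.get(label)
            if field:
                setattr(fact, field, span)
        facts.append(fact)
    return facts
```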

The next step is to use information visualization to provide a summarization of the facts in this automatically extracted knowledge base. The user can interact with the visualization to find summaries at different granularities, which makes it easy to discover even extremely uncommon facts.

We have used this methodology to build an interactive visualization of events in human history by machine reading Wikipedia articles. I will demo the visualization and describe the results of a user study that evaluates this interactive visualization for a summarization task.

Thursday, Oct 26, 10-11 AM, Location: TASC1 9408.

Logan's talk about Synchronous Grammar Lexicalization
18 Oct 2017

Logan will be talking about Synchronous Grammar Lexicalization in our lab meeting tomorrow (Oct 19th). Here is the abstract of his talk:

This work presents two results in the field of formal language theory. The first result shows that the class of synchronous context free grammars (SCFG) cannot prefix lexicalize itself; the second shows that SCFG is prefix lexicalized by the class of synchronous tree-adjoining grammars (STAG). We present an algorithm for converting an SCFG to an equivalent prefix lexicalized STAG, and demonstrate that the conversion does not excessively increase the size or parse complexity of the grammar. We conclude with a discussion of some practical applications to word alignment and hierarchical translation decoding.
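
For background, the sketch below shows one way to represent a synchronous rule and to check the prefix-lexicalized property, read here as: the source side of every rule begins with a terminal symbol. The representation is illustrative and not taken from the paper.

```python
# Hedged sketch: a bare-bones synchronous rule and a prefix-lexicalization check.
from dataclasses import dataclass
from typing import List, Tuple

Symbol = Tuple[str, bool]   # (symbol, is_terminal)

@dataclass
class SyncRule:
    lhs: str
    src: List[Symbol]                  # source-side right-hand side
    tgt: List[Symbol]                  # target-side right-hand side
    links: List[Tuple[int, int]]       # aligned nonterminal positions (src_idx, tgt_idx)

def is_prefix_lexicalized(grammar: List[SyncRule]) -> bool:
    # Every rule's source side must start with a terminal symbol.
    return all(rule.src and rule.src[0][1] for rule in grammar)

# Example rule pair <X -> a X b, X -> X c>, with the two X's linked.
rule = SyncRule(lhs="X",
                src=[("a", True), ("X", False), ("b", True)],
                tgt=[("X", False), ("c", True)],
                links=[(1, 0)])
print(is_prefix_lexicalized([rule]))   # True: the source side starts with the terminal "a"
```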

Thursday, Oct 19, 10-11 AM, Location: TASC1 9408.

Andrei Vacariu MSc Thesis Defence
17 Oct 2017

On Oct 17th at 10am in ASB 9705, Andrei Vacariu will defend his MSc thesis on the topic of “A High-Throughput Dependency Parser”.

Abstract:

Dependency parsing is an important task in NLP, and it is used in many downstream tasks for analyzing the semantic structure of sentences. Analyzing very large corpora in a reasonable amount of time, however, requires a fast parser. In this thesis we develop a transition-based dependency parser with a neural-network decision function which outperforms spaCy, Stanford CoreNLP, and MALTParser in terms of speed while having comparable, and in some cases better, accuracy. We also develop several variations of our model to investigate the trade-off between accuracy and speed. This leads to a model with a greatly reduced feature set, which is much faster but less accurate, as well as a more complex model involving a BiLSTM simultaneously trained to produce POS tags, which is more accurate but much slower. We compare the accuracy and speed of our different parser models against the three parsers mentioned above on the Penn Treebank, Universal Dependencies English, and OntoNotes datasets, using two different dependency tree representations, to show how our parser competes on data from very different domains. Our experimental results reveal that our main model is much faster than the three external parsers while also being more accurate; our reduced feature set model is significantly faster while remaining competitive in terms of accuracy; and our BiLSTM model is somewhat slower than CoreNLP, although it is significantly more accurate.
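
For context, the sketch below shows the generic arc-standard transition loop that transition-based parsers of this kind are built on; the `score` callback stands in for the neural-network decision function, and nothing here is the thesis implementation.

```python
# Hedged sketch of a generic arc-standard transition-based parsing loop.
from typing import Callable, List, Tuple

SHIFT, LEFT_ARC, RIGHT_ARC = "shift", "left_arc", "right_arc"

def parse(words: List[str],
          score: Callable[[List[int], List[int]], str]) -> List[Tuple[int, int]]:
    """Return a list of (head, dependent) arcs over token indices."""
    stack: List[int] = []
    buffer: List[int] = list(range(len(words)))
    arcs: List[Tuple[int, int]] = []
    while buffer or len(stack) > 1:
        action = score(stack, buffer)           # classifier picks the next transition
        if action == SHIFT and buffer:
            stack.append(buffer.pop(0))
        elif action == LEFT_ARC and len(stack) >= 2:
            dep = stack.pop(-2)                 # second-from-top becomes a dependent
            arcs.append((stack[-1], dep))
        elif action == RIGHT_ARC and len(stack) >= 2:
            dep = stack.pop()                   # top becomes a dependent of the item below
            arcs.append((stack[-1], dep))
        elif buffer:                            # illegal prediction: fall back to shift
            stack.append(buffer.pop(0))
        else:                                   # nothing left to shift: force a right arc
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# Toy usage with a trivial "shift everything, then reduce" policy in place of a trained model:
arcs = parse(["John", "saw", "Mary"],
             lambda stack, buffer: SHIFT if buffer else RIGHT_ARC)
print(arcs)   # [(1, 2), (0, 1)] in this toy run
```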

M.Sc. Examining Committee:

  • Dr. Anoop Sarkar, Senior Supervisor
  • Dr. Nick Sumner, Supervisor
  • Dr. Fred Popowich, Internal Examiner
  • Dr. Parmit Chilana, Chair
