News

Logan's talk about Synchronous Grammar Lexicalization
18 Oct 2017

Logan will be talking about Synchronous Grammar Lexicalization in our lab meeting tomorrow (Oct 19th). Here is the abstract of his talk:

This work presents two results in the field of formal language theory. The first result shows that the class of synchronous context free grammars (SCFG) cannot prefix lexicalize itself; the second shows that SCFG is prefix lexicalized by the class of synchronous tree-adjoining grammars (STAG). We present an algorithm for converting an SCFG to an equivalent prefix lexicalized STAG, and demonstrate that the conversion does not excessively increase the size or parse complexity of the grammar. We conclude with a discussion of some practical applications to word alignment and hierarchical translation decoding.

Thursday, Oct 19, 10-11 AM, Location: TASC1 9408.

Ashkan and Andrei's Talk about their ongoing research
12 Oct 2017

Andrei is going to defend his thesis next week and Ashkan is planning to have his depth exam shortly afterwards. Tomorrow (October 12th), in our lab meeting, they will present what they have done so far.

Ashkan’s Abstract: Studies on Machine Translation (MT) have a long history, but most work in this area assumes access to entire sentences. As a result, such methods are not practical for real-time machine translation, where the objective is to start translating before the full sentence has been received. The divergent syntax of different languages makes it a great challenge for both humans and machines to begin translating while input is still arriving. Over the past few years, the great success of deep neural networks in real-time translation systems has taken the field in a completely new direction and improved results; however, many of the problems from conventional systems remain unsolved. This talk provides a review of the latest methods for utilizing neural attention models in simultaneous machine translation.

Andrei’s Abstract: Dependency parsing is an important task in NLP, used in many downstream tasks for analyzing the semantic structure of sentences. Analyzing very large corpora in a reasonable amount of time, however, requires a fast parser. In this thesis we develop a transition-based dependency parser with a neural-network decision function which outperforms spaCy, Stanford CoreNLP, and MALTParser in terms of speed while having comparable, and in some cases better, accuracy. We also develop several variations of our model to investigate the trade-off between accuracy and speed. This leads to a model with a greatly reduced feature set, which is much faster but less accurate, as well as a more complete model involving a BiLSTM simultaneously trained to produce POS tags, which is more accurate but much slower. We compare the accuracy and speed of our parser models against the three parsers mentioned above on the Penn Treebank, Universal Dependencies English, and OntoNotes datasets, using two different dependency tree representations to show how our parser competes on data from very different domains. Our experimental results reveal that our main model is much faster than the three external parsers while also being more accurate; our reduced-feature-set model is significantly faster while remaining competitive in accuracy; and our BiLSTM model is somewhat slower than CoreNLP but significantly more accurate.
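For readers unfamiliar with transition-based dependency parsing, the core machinery can be sketched in a few lines. The following is a minimal, generic arc-standard parser; the `decide` function here is a toy stub standing in for a learned classifier such as the neural-network decision function the thesis describes. Names and behavior are illustrative only, not Andrei's actual model.

```python
# Minimal arc-standard transition-based dependency parser (illustrative
# sketch; `decide` is a stub standing in for a trained classifier).

def parse(words, decide):
    """Parse a sentence with the arc-standard transition system.

    words:  list of tokens (index 0 is an artificial ROOT).
    decide: function (stack, buffer) -> "SHIFT", "LEFT", or "RIGHT".
    Returns a dict mapping dependent index -> head index.
    """
    stack, buffer = [0], list(range(1, len(words)))
    heads = {}
    while buffer or len(stack) > 1:
        action = decide(stack, buffer)
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))
        elif action == "LEFT" and len(stack) >= 2:
            dep = stack.pop(-2)   # second-from-top depends on top
            heads[dep] = stack[-1]
        elif action == "RIGHT" and len(stack) >= 2:
            dep = stack.pop()     # top depends on second-from-top
            heads[dep] = stack[-1]
        else:
            break                 # no valid action left: stop
    return heads

# Toy decision function: shift everything, then attach right-to-left,
# chaining back to ROOT -- just to show the machinery running.
def toy_decide(stack, buffer):
    return "SHIFT" if buffer else "RIGHT"

print(parse(["ROOT", "parsers", "run", "fast"], toy_decide))
# -> {3: 2, 2: 1, 1: 0}
```

In a real parser the decision function scores transitions from features of the stack and buffer (or, as in the BiLSTM variant above, from contextual token representations), which is where all of the accuracy/speed trade-offs discussed in the abstract live.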

Jetic's Talk about an easily extendable HMM word aligner
05 Oct 2017

In our lab meeting tomorrow (October 5th), Jetic will talk about an easily extendable HMM word aligner. Here’s the abstract of his talk: We present a new word aligner with built-in support for alignment types. It is open-source software that can be easily extended with models of the user’s own design. We expect it to meet the needs of academics as well as scientists working in industry who wish to perform word alignment and experiment with their own new models. The basic design and structure of the aligner will be introduced.
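As background, the HMM word alignment model (Vogel et al., 1996) that such an aligner builds on can be decoded with standard Viterbi search. The sketch below uses toy translation and jump probabilities; it illustrates the general model only and is not the aligner's actual code or API.

```python
# Generic Viterbi decoder for the HMM word alignment model: the alignment
# of each target word depends on a translation probability t[f][e] and a
# jump probability over the distance from the previous aligned position.
# Toy probabilities; illustrative only.

import math

def viterbi_align(src, tgt, t, jump):
    """Return, for each target position j, the source index it aligns to."""
    n = len(src)
    # best[j][i]: best log-prob of aligning tgt[:j+1] with tgt[j] -> src[i]
    best = [[-math.inf] * n for _ in tgt]
    back = [[0] * n for _ in tgt]
    for i in range(n):
        best[0][i] = math.log(t[tgt[0]][src[i]])
    for j in range(1, len(tgt)):
        for i in range(n):
            emit = math.log(t[tgt[j]][src[i]])
            for k in range(n):
                score = best[j - 1][k] + math.log(jump(i - k)) + emit
                if score > best[j][i]:
                    best[j][i], back[j][i] = score, k
    # Backtrace from the best final state.
    i = max(range(n), key=lambda i: best[-1][i])
    alignment = [i]
    for j in range(len(tgt) - 1, 0, -1):
        i = back[j][i]
        alignment.append(i)
    return alignment[::-1]

src = ["das", "Haus"]
tgt = ["the", "house"]
t = {"the": {"das": 0.9, "Haus": 0.1}, "house": {"das": 0.1, "Haus": 0.9}}
jump = lambda d: 0.6 if d == 1 else 0.2   # favor moving one step right
print(viterbi_align(src, tgt, t, jump))   # -> [0, 1]
```

An extensible aligner in this style would let users swap in their own emission and transition models (and, as the abstract notes, alignment types) while reusing the same decoding machinery.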

Hasan's presentation on Training Data Annotation for Segmentation Classification
28 Sep 2017

In the lab meeting on September 28 (Thursday), Hasan will talk about his M.Sc. thesis on training data annotation for segmentation classification in simultaneous translation. Here’s the abstract of his talk: Segmentation of the incoming speech stream and translating segments incrementally is a commonly used technique that improves latency in spoken language translation. Previous work has explored creating training data for segmentation by finding segments that maximize translation quality with a user-defined bound on segment length.

In this work, we provide a new algorithm, using Pareto-optimality, for finding good segment boundaries that balance the trade-off between latency and translation quality. Our experimental results show that we can provide qualitatively better segments that improve latency without substantially hurting translation quality.
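The Pareto-optimality idea can be illustrated generically: score each candidate segmentation by a (latency, quality) pair and keep only candidates not dominated by another with lower latency and higher quality. The numbers and the filtering routine below are hypothetical, a sketch of the general concept rather than the thesis algorithm.

```python
# Generic Pareto-frontier filter over candidate segmentations, each
# scored as (latency, quality). A candidate is dominated if another has
# latency <= and quality >=, strictly better in at least one dimension.
# Illustrative sketch only, with made-up scores.

def pareto_frontier(candidates):
    """Keep candidates that minimize latency while maximizing quality."""
    frontier = []
    # Sort by latency ascending, breaking ties by quality descending;
    # then a candidate survives only if it beats the best quality so far.
    for lat, qual in sorted(candidates, key=lambda c: (c[0], -c[1])):
        if not frontier or qual > frontier[-1][1]:
            frontier.append((lat, qual))
    return frontier

segments = [(1.0, 0.60), (1.5, 0.80), (2.0, 0.75), (3.0, 0.90)]
print(pareto_frontier(segments))
# (2.0, 0.75) is dropped: (1.5, 0.80) is both faster and better.
```

A system can then pick a boundary from the surviving frontier according to how much latency the user is willing to trade for quality.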

Golnar's Talk on Extractive Question Answering
21 Sep 2017

In the lab meeting on September 21 (Thursday), Golnar will talk about utilizing recurrent span representations for Extractive Question Answering.

Recent Publications