News

Lab meeting cancelled for reading break
18 Feb 2020

Due to the spring reading break, there is no lab meeting scheduled for tomorrow, 18 February. Meetings will resume as usual next week.

Nested Named Entity Recognition
11 Feb 2020

In our lab meeting tomorrow, Vincent will give a brief talk about his research in Nested Named Entity Recognition.

Here are the title and abstract:

Nested Named Entity Recognition

Abstract: Many named entities contain other named entities inside them, especially in the biomedical domain. For instance, “Bank of China” and “University of Washington” are both organizations with nested locations. However, for largely technical reasons the nested structure was ignored for a long time: only the outermost entities were considered, which discards much of the original semantic meaning. Since Finkel and Manning first proposed a discriminative constituency parser for nested named entity recognition in 2009, many methods have been employed to detect nested entities successfully. In this talk, we will review the main datasets for Nested NER and some approaches published in recent conferences.
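To make the nesting concrete, here is a small, purely illustrative Python sketch (not from the talk) that represents entities as labelled token spans and shows what a flat NER view keeps when only the outermost entities are retained. The spans and labels are assumptions based on the example phrase above.

```python
# Illustrative only: nested entity annotations as (start, end, label) token spans.
tokens = ["Bank", "of", "China", "and", "University", "of", "Washington"]

entities = [
    (0, 3, "ORG"),  # "Bank of China"
    (2, 3, "LOC"),  # "China", nested inside the ORG
    (4, 7, "ORG"),  # "University of Washington"
    (6, 7, "LOC"),  # "Washington", nested inside the ORG
]

def outermost(spans):
    """Keep only spans not strictly contained in another span (the flat NER view)."""
    keep = []
    for s, e, lab in spans:
        contained = any(s2 <= s and e <= e2 and (s2, e2) != (s, e)
                        for s2, e2, _ in spans)
        if not contained:
            keep.append((s, e, lab))
    return keep

print(outermost(entities))  # [(0, 3, 'ORG'), (4, 7, 'ORG')] -- the nested LOC spans are lost
```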

Tuesday, Feb 11th, 09:30 a.m. TASC1 9408.

Monotonic Infinite Lookback Attention for Simultaneous Machine Translation
04 Feb 2020

In our lab meeting tomorrow, Ashkan will discuss Arivazhagan et al. 2019 on attention for simultaneous machine translation.

Here are the title and abstract:

Monotonic Infinite Lookback Attention for Simultaneous Machine Translation

Abstract: Simultaneous machine translation begins to translate each source sentence before the source speaker is finished speaking, with applications to live and streaming scenarios. Simultaneous systems must carefully schedule their reading of the source sentence to balance quality against latency. We present the first simultaneous translation system to learn an adaptive schedule jointly with a neural machine translation (NMT) model that attends over all source tokens read thus far. We do so by introducing Monotonic Infinite Lookback (MILk) attention, which maintains both a hard, monotonic attention head to schedule the reading of the source sentence, and a soft attention head that extends from the monotonic head back to the beginning of the source. We show that MILk’s adaptive schedule allows it to arrive at latency-quality trade-offs that are favorable to those of a recently proposed wait-k strategy for many latency values.
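As a rough illustration of the mechanism described in the abstract, here is a small NumPy sketch (my own simplification, not the paper's implementation): a hard monotonic head decides how far to read into the source, and a soft attention head then attends over everything from that position back to the beginning. At training time MILk optimises an expected, differentiable version of this schedule; the sketch only shows the test-time intuition, and the names and stopping probabilities below are assumptions.

```python
# A simplified sketch of the MILk idea (not the paper's training procedure):
# a hard monotonic head schedules reading, a soft head attends over the prefix read so far.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def milk_step(query, source_states, stop_prob, threshold=0.5):
    """One decoder step: advance the monotonic head, then attend from it back to position 0."""
    j = 0
    while j < len(source_states) - 1 and stop_prob[j] < threshold:
        j += 1  # hard monotonic head keeps reading source tokens
    scores = source_states[: j + 1] @ query          # soft "infinite lookback" scores
    weights = softmax(scores)
    context = weights @ source_states[: j + 1]       # context over positions 0..j
    return j, context

# Toy usage with 5 random source states of dimension 4.
rng = np.random.default_rng(0)
src, q = rng.normal(size=(5, 4)), rng.normal(size=4)
stop_prob = np.array([0.1, 0.2, 0.7, 0.9, 1.0])
j, ctx = milk_step(q, src, stop_prob)
print(j, ctx.shape)  # 2 (4,)
```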

Tuesday, Feb 4th, 09:30 a.m. TASC1 9408.

Bootstrapping via Graph Propagation
28 Jan 2020

In our lab meeting tomorrow, Anoop will discuss bootstrapping via graph propagation.

Here are the title and abstract:

Bootstrapping via Graph Propagation

Abstract: In natural language processing, the bootstrapping algorithm introduced by David Yarowsky (25 years ago!) is a discriminative unsupervised learning algorithm that uses some seed rules to bootstrap a classifier (this is the ordinary sense of bootstrapping, which is distinct from the Bootstrap in statistics). The Yarowsky algorithm works remarkably well on a wide variety of NLP classification tasks such as distinguishing between word senses and deciding if a noun phrase is an organization, location, or person.

Extending previous attempts at providing an objective function optimization view of Yarowsky, we show that bootstrapping a classifier from a small set of seed rules can be viewed as the propagation of labels between examples via features shared between them. This talk introduces a novel variant of the Yarowsky algorithm based on this view. It is a bootstrapping learning method which uses a graph propagation algorithm with a well defined per-iteration objective function that incorporates the cautious behaviour of the original Yarowsky algorithm.

The experimental results show that our proposed bootstrapping algorithm achieves state-of-the-art performance or better on several different natural language datasets, outperforming other unsupervised methods such as the EM algorithm. We show that cautious learning is an important principle in unsupervised learning, although it is not yet well understood, and that the Yarowsky algorithm can outperform or match co-training without any reliance on multiple views.
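For readers unfamiliar with the algorithm, the following short Python sketch shows a cautious Yarowsky-style loop in its plainest self-training form: seed rules label a few examples, feature-label counts are collected, and only confidently predicted examples are added to the labelled set. This is a toy illustration with made-up features, not Anoop's implementation; the talk's contribution recasts this process as label propagation on a graph of examples and shared features, which the sketch does not implement.

```python
# Toy sketch of cautious Yarowsky-style bootstrapping (illustration only).
from collections import Counter, defaultdict

def yarowsky_bootstrap(examples, seed_rules, n_iters=5, threshold=0.8):
    """examples: list of feature sets; seed_rules: feature -> label."""
    labels = {}
    for i, feats in enumerate(examples):            # apply the seed rules
        for f in feats:
            if f in seed_rules:
                labels[i] = seed_rules[f]
                break
    for _ in range(n_iters):
        feat_label = defaultdict(Counter)           # feature/label co-occurrence counts
        for i, y in labels.items():
            for f in examples[i]:
                feat_label[f][y] += 1
        for i, feats in enumerate(examples):        # cautiously label new examples
            if i in labels:
                continue
            votes = Counter()
            for f in feats:
                votes.update(feat_label[f])
            if votes:
                y, c = votes.most_common(1)[0]
                if c / sum(votes.values()) >= threshold:
                    labels[i] = y
    return labels

# Toy usage: decide ORG vs PER from made-up features.
examples = [{"suffix=Inc", "cap"}, {"title=Dr", "cap"}, {"suffix=Inc", "ctx=acquired"},
            {"ctx=acquired", "lower"}, {"title=Dr", "ctx=said"}]
seeds = {"suffix=Inc": "ORG", "title=Dr": "PER"}
print(yarowsky_bootstrap(examples, seeds))  # example 3 is labelled ORG via the shared feature
```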

Tuesday, Jan 28th, 09:30 a.m. TASC1 9408.

Jetic will practice his PhD Depth Examination
21 Jan 2020

In our lab meeting tomorrow, Jetic will give a depth presentation on Knowledge-Based Systems.

Here is the title and abstract:

Neural Knowledge-Based Systems in NLP

Abstract: Recent years have seen wide applications of external knowledge sources in end-to-end NLP tasks. Statistical and rule-based methods often rely on the utilisation of logical representations, which allow for easier integration but are limited when it comes to complex implicit inference. For neural models, the ability to better handle ambiguous information has allowed for far more powerful inference and deduction capabilities, but since neural networks’ internal states are often represented as uninterpretable vectors, more effort is needed to leverage structured knowledge and develop more extensible and powerful intelligent agents. This paper highlights recent advances in the utilisation of knowledge bases in neural models, as well as important research topics around neural knowledge bases themselves.

Tuesday, Jan 21st, 09:30 a.m. TASC1 9408.

Recent Publications