Gholamreza Haffari successfully defended his Ph.D. thesis on August 17th, 2009. His thesis is entitled Machine Learning Approaches To Dealing with Limited Training Data in Statistical Machine Translation. His external examiner was Kevin Knight and his internal examiner was Oliver Schulte. His senior supervisor was Anoop Sarkar, and his supervisory committee included Greg Mori, Shaojun Wang, and Valentine Kabanets. Torsten Moller chaired the defense.
Here is the abstract of his thesis:
Statistical Machine Translation (SMT) models learn how to translate by examining a bilingual parallel corpus containing sentences aligned with their human-produced translations. However, high-quality translation output depends on the availability of massive amounts of parallel text in the source and target languages. Many languages are considered low-density, either because the population speaking the language is small or because, even though millions of people speak the language, insufficient online resources are available in it. This thesis covers machine learning approaches for dealing with such situations in statistical machine translation, where the amount of available bilingual data is limited.
The problem of learning from insufficient labeled training data has been addressed in the machine learning community under two general frameworks: (i) semi-supervised learning and (ii) active learning. The complex nature of the machine translation task poses severe challenges to most of the algorithms developed in the machine learning community for these two learning scenarios. In this thesis, I develop semi-supervised learning and active learning algorithms to deal with the shortage of bilingual training data for the statistical machine translation task.
I assume that we are given access to a monolingual corpus containing a large number of sentences in the source language, in addition to a small or moderately sized bilingual corpus. The idea is to take advantage of this readily available monolingual data to build a better SMT model using bootstrapping-style methods: selecting an important subset of these monolingual sentences, preparing their translations, and using them together with the original sentence pairs to re-train the SMT model effectively. If a human annotator provides the translations of the selected sentences, the framework fits into the active learning scenario in machine learning; if instead we use the (system-generated) translations produced by the SMT model itself, the framework fits into the semi-supervised learning scenario. The key points that I address throughout this thesis are (1) how to choose the important sentences, (2) how to provide their translations (with as little effort as possible), and (3) how to use the newly collected information in training the SMT model.
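To make the bootstrapping framework concrete, here is a minimal sketch of the shared loop, in which the semi-supervised (self-training) and active learning variants differ only in how the translations of the selected sentences are obtained. All names here (train_smt, select_informative, oracle) are hypothetical placeholders rather than the thesis implementation; a real system would plug in an SMT toolkit and one of the sentence-selection strategies studied in the thesis.

```python
# Sketch of the bootstrapping loop for limited-data SMT (hypothetical names).
# The two scenarios differ only in where translations of selected sentences
# come from: the current model itself (semi-supervised / self-training) or
# a human annotator (active learning).

def train_smt(bilingual_pairs):
    """Placeholder: train an SMT model on (source, target) sentence pairs."""
    raise NotImplementedError

def select_informative(model, monolingual_pool, k):
    """Placeholder: pick the k most useful source sentences to translate next."""
    raise NotImplementedError

def bootstrap(bilingual, monolingual, rounds=5, k=100, oracle=None):
    """Iteratively grow the training set from monolingual source sentences.

    If `oracle` is None, use the model's own translations (semi-supervised);
    otherwise `oracle` maps a source sentence to a human translation
    (active learning).
    """
    labeled = list(bilingual)
    pool = list(monolingual)
    model = train_smt(labeled)
    for _ in range(rounds):
        batch = select_informative(model, pool, k)
        pool = [s for s in pool if s not in batch]
        if oracle is None:
            new_pairs = [(s, model.translate(s)) for s in batch]  # self-training
        else:
            new_pairs = [(s, oracle(s)) for s in batch]           # human labels
        labeled.extend(new_pairs)
        model = train_smt(labeled)  # re-train on original + newly collected pairs
    return model
```

The loop mirrors the three questions above: select_informative corresponds to choosing the important sentences, the oracle-or-model branch corresponds to how their translations are provided, and the final re-training step corresponds to how the new information is used.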