9 - Discriminative Training
from III - Advanced Topics
Published online by Cambridge University Press: 05 June 2012
Summary
This book presents a variety of statistical machine translation models, such as word-based models (Chapter 4), phrase-based models (Chapter 5), and tree-based models (Chapter 11). When we describe these models, we mostly follow a generative modeling approach. We break up the translation problem (sentence translation) into smaller steps (say, into the translation of phrases) and build component models for these steps using maximum likelihood estimation.
By decomposing the bigger problem into smaller steps we stay within a mathematically coherent formulation of the problem – the decomposition is done using rules such as the chain rule or Bayes' rule. We throw in a few independence assumptions that are less mathematically justified (say, that the translation of one phrase is independent of the others), but otherwise the mathematically sound decomposition gives us a straightforward way to combine the different component models.
In this chapter, we depart from generative modeling and embrace a different mindset. We want to directly optimize translation performance. We use machine learning methods to discriminate between good translations and bad translations and then to adjust our models to give preference to good translations.
To give a quick overview of the approach: possible translations of a sentence, so-called candidate translations, are represented using a set of features. Each feature derives from one property of the translation, and its feature weight indicates its relative importance. The task of the machine learning method is to find good feature weights.
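The feature-based view above can be sketched in a few lines of code: each candidate translation carries a set of feature values, a linear model scores candidates as a weighted sum of those values, and a simple perceptron-style update shifts the weights toward a good translation and away from a bad one. The feature names, values, and weights below are illustrative assumptions, not taken from the book.

```python
# A minimal sketch of discriminative reranking with a linear model.
# Feature names, values, and weights here are made up for illustration.

def score(features, weights):
    """Score a candidate translation as a weighted sum of its feature values."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def perceptron_update(weights, good_feats, bad_feats, learning_rate=0.1):
    """Nudge weights toward the good translation's features, away from the bad one's."""
    for name in set(good_feats) | set(bad_feats):
        delta = good_feats.get(name, 0.0) - bad_feats.get(name, 0.0)
        weights[name] = weights.get(name, 0.0) + learning_rate * delta

# Two hypothetical candidate translations of one source sentence,
# each represented by a small set of feature values.
candidates = {
    "the house is small": {"lm_logprob": -2.1, "tm_logprob": -1.5, "length": 4},
    "small the house is": {"lm_logprob": -6.3, "tm_logprob": -1.5, "length": 4},
}
weights = {"lm_logprob": 1.0, "tm_logprob": 1.0, "length": 0.1}

# The model's choice is simply the highest-scoring candidate.
best = max(candidates, key=lambda c: score(candidates[c], weights))
```

Real systems use many more features (language model score, translation model scores, word penalty, and so on) and more careful tuning procedures than this single perceptron step, but the core idea — rank candidates by a weighted feature sum and learn the weights — is the same.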
Statistical Machine Translation, pp. 249–288. Publisher: Cambridge University Press. Print publication year: 2009.