Fall 2005
Combine your Model 1 t-table and your bigram language model to translate the first 10 sentences of last assignment's held-out data from French to English. Assume:
For efficiency, limit the number of English translations considered for each French word to five.
Try translation with and without the language model probabilities. How often do the LM probabilities affect the choice of English output? Do the translations produced with the LM look better to you?
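The handout leaves the decoding procedure open; below is a minimal sketch of one simple option, a greedy word-for-word decoder with no reordering and no NULL insertion. It assumes the t-table is stored as a nested dict t[f][e] = t(f|e) and that bigram_prob(prev, e) returns a smoothed P(e | prev); both interfaces are assumptions, not part of the handout. The use_lm flag supports the with/without-LM comparison asked for above.

    import math

    def decode(f_sent, t, bigram_prob, top_k=5, use_lm=True):
        """Greedy word-for-word decoder: for each French word, pick the
        English candidate maximizing t(f|e) * P(e | previous output)."""
        output, prev = [], "<s>"
        for f in f_sent:
            # Keep only the top_k candidates by t(f|e), per the note above.
            cands = sorted(t.get(f, {}).items(), key=lambda kv: -kv[1])[:top_k]
            best_e, best = f, float("-inf")  # fall back to copying f through
            for e, p in cands:
                score = math.log(p)
                if use_lm:
                    # Floor the LM probability in case of unseen bigrams.
                    score += math.log(max(bigram_prob(prev, e), 1e-7))
                if score > best:
                    best, best_e = score, e
            output.append(best_e)
            prev = best_e
        return output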
Your assignment is to implement IBM Model 1, described in Section 27 of the Knight tutorial. You will train the parameters using Expectation Maximization (EM) on a parallel French-English corpus and evaluate the results on held-out test data in terms of model perplexity (sketches of the EM loop and the perplexity computation appear below). In particular, your implementation should include:
Training data can be found here: /u/cs248/hw7/. This directory contains parallel French-English text from the Canadian Parliament. Both sides (French and English) have been run through a tokenizer to split off punctuation from words.
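The handout gives only the directory, so the file names below are hypothetical. A minimal loader for line-aligned, pre-tokenized parallel text might look like this; it prepends the special NULL word Model 1 uses to each English sentence.

    def load_parallel(french_path, english_path):
        """Read line-aligned tokenized text into (French, English) token-list
        pairs, prepending NULL to each English sentence for Model 1."""
        pairs = []
        with open(french_path) as ff, open(english_path) as ef:
            for f_line, e_line in zip(ff, ef):
                pairs.append((f_line.split(), ["NULL"] + e_line.split()))
        return pairs

    # Hypothetical file names; the handout specifies only the directory.
    pairs = load_parallel("/u/cs248/hw7/train.f", "/u/cs248/hw7/train.e")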
You should floor all probabilities at a low value, say 1e-07, to avoid numerical problems as well as dead ends in the EM training. Similarly, you may need to prune low-valued parameters to keep memory usage and file sizes manageable.
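As a concrete illustration, here is a minimal sketch of Model 1 EM with the flooring applied, plus a pruning pass. It assumes the t-table is a nested dict t[f][e] = t(f|e) and that sentence pairs are (French tokens, English tokens including NULL) as in the loader above; the pruning threshold is illustrative, not from the handout.

    from collections import defaultdict

    FLOOR = 1e-7  # probability floor suggested above

    def init_uniform(pairs):
        """Initialize t(f|e) uniformly for every co-occurring (f, e) pair;
        any constant gives the same first E-step for Model 1."""
        f_vocab = {f for f_sent, _ in pairs for f in f_sent}
        t = defaultdict(dict)
        for f_sent, e_sent in pairs:
            for f in f_sent:
                for e in e_sent:
                    t[f][e] = 1.0 / len(f_vocab)
        return t

    def em_iteration(pairs, t):
        """One E-step/M-step pass of IBM Model 1 EM."""
        count = defaultdict(float)  # expected count of (f, e) links
        total = defaultdict(float)  # expected count of links to e
        for f_sent, e_sent in pairs:
            for f in f_sent:
                # Normalizer: total t-mass for f over this English sentence.
                z = sum(t[f].get(e, FLOOR) for e in e_sent)
                for e in e_sent:
                    frac = t[f].get(e, FLOOR) / z  # fractional alignment count
                    count[(f, e)] += frac
                    total[e] += frac
        # M-step: renormalize expected counts into probabilities, flooring
        # so pruned or vanishing entries cannot become exact zeros.
        for (f, e), c in count.items():
            t[f][e] = max(c / total[e], FLOOR)
        return t

    def prune(t, threshold=1e-4):
        """Drop tiny entries to keep memory and file sizes manageable."""
        for f in list(t):
            t[f] = {e: p for e, p in t[f].items() if p >= threshold}
        return t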
Please turn in:
Note that this is a big data set, and training is time- and memory-intensive.
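For the held-out evaluation mentioned above, one standard formulation is per-word perplexity, 2^(-log2 P / N). Here is a sketch under the same t[f][e] and NULL conventions as the code above, ignoring Model 1's constant epsilon length term.

    import math

    def perplexity(pairs, t, floor=1e-7):
        """Per-word perplexity on held-out pairs, using Model 1's
        P(f|e) = prod_j (1/(l+1)) * sum_i t(f_j | e_i),
        where the English side (length l+1 here) includes NULL."""
        log_p, n_words = 0.0, 0
        for f_sent, e_sent in pairs:
            for f in f_sent:
                z = sum(t.get(f, {}).get(e, 0.0) for e in e_sent)
                log_p += math.log2(max(z / len(e_sent), floor))
                n_words += 1
        return 2.0 ** (-log_p / n_words)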