Fall 2011
Implement the fertility-HMM alignment model of Zhao and Gildea (2010) using Gibbs sampling. Compare the Viterbi alignments from Model 1 to the alignments produced by the fertility-HMM model with Gibbs sampling for the first few sentences of the test corpus.
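The full sampler of Zhao and Gildea (2010) handles fertility and NULL alignment more carefully than there is room for here, but the heart of any Gibbs sampler for this model is the step that resamples a single alignment link from its conditional distribution given all the other links. The sketch below is only a simplified illustration of that step, not the paper's exact model: t_prob, jump_prob, and fert_prob are hypothetical dictionaries of already-estimated translation, jump-width, and fertility probabilities, e_sent[0] is assumed to be a NULL token, and NULL transitions get no special treatment.

import random
from collections import Counter

def resample_link(j, f_sent, e_sent, align, t_prob, jump_prob, fert_prob, rng):
    """One Gibbs step: resample which English word f_sent[j] aligns to,
    holding every other link fixed.  align[j] is an index into e_sent,
    with e_sent[0] assumed to be a NULL token."""
    fert = Counter(align)                    # current fertility of each English position
    fert[align[j]] -= 1                      # remove j's current link from the counts
    prev = align[j - 1] if j > 0 else 0
    nxt = align[j + 1] if j + 1 < len(align) else None

    scores = []
    for i in range(len(e_sent)):             # every candidate English position
        s = t_prob.get((f_sent[j], e_sent[i]), 1e-7)       # translation term
        s *= jump_prob.get(i - prev, 1e-7)                  # transition into i
        if nxt is not None:
            s *= jump_prob.get(nxt - i, 1e-7)               # transition out of i
        # Fertility ratio: cost of raising e_i's fertility by one.
        s *= (fert_prob.get((e_sent[i], fert[i] + 1), 1e-7)
              / fert_prob.get((e_sent[i], fert[i]), 1e-7))
        scores.append(s)

    r = rng.random() * sum(scores)           # draw from the normalized scores
    for i, s in enumerate(scores):
        r -= s
        if r <= 0.0:
            break
    align[j] = i

def gibbs_sweep(bitext, alignments, t_prob, jump_prob, fert_prob, rng):
    """One full sweep over the corpus: resample every link once."""
    for (e_sent, f_sent), align in zip(bitext, alignments):
        for j in range(len(f_sent)):
            resample_link(j, f_sent, e_sent, align, t_prob, jump_prob, fert_prob, rng)

# Hypothetical usage, after the three parameter tables have been estimated:
# rng = random.Random(0)
# for sweep in range(100):
#     gibbs_sweep(bitext, alignments, t_prob, jump_prob, fert_prob, rng)

In practice you would run many sweeps, discard an initial burn-in period, and read off (or average over) the sampled alignments before comparing them to the Model 1 Viterbi alignments.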
Your assignment is to implement IBM Model 1. You will train the model's parameters using Expectation Maximization on a parallel French-English corpus and evaluate the results on held-out test data in terms of model perplexity. In particular, your implementation should include EM training of the translation probabilities, perplexity evaluation on the held-out data, and extraction of Viterbi alignments.
Training data can be found here: /u/cs448/data/hw4/. This directory contains parallel French-English text from the Canadian Parliament. Both sides (French and English) have been run through a tokenizer to split off punctuation from words.
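A minimal sketch of the core pieces might look like the following, assuming the corpus is read as sentence-aligned, whitespace-tokenized lines and that a NULL token is prepended to each English sentence. Restricting t(f|e) to co-occurring word pairs is one way to keep memory in check, and the 1e-07 floor anticipates the note below. The file names in the usage comment are hypothetical placeholders, not the actual names in the data directory.

from collections import defaultdict
import math

NULL = "<NULL>"
FLOOR = 1e-7

def read_bitext(e_path, f_path):
    """Read sentence-aligned, tokenized text; one sentence per line."""
    with open(e_path, encoding="utf-8") as ef, open(f_path, encoding="utf-8") as ff:
        return [([NULL] + e.split(), f.split()) for e, f in zip(ef, ff)]

def train_model1(bitext, iterations=10):
    """EM training of Model 1 translation probabilities t(f | e)."""
    # Initialize only over co-occurring (f, e) pairs to keep the table small.
    t = {}
    for e_sent, f_sent in bitext:
        for f in f_sent:
            for e in e_sent:
                t[(f, e)] = 1.0
    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(f, e)
        total = defaultdict(float)   # expected counts c(e)
        for e_sent, f_sent in bitext:
            for f in f_sent:
                z = sum(t[(f, e)] for e in e_sent)          # E-step normalizer
                for e in e_sent:
                    p = t[(f, e)] / z
                    count[(f, e)] += p
                    total[e] += p
        for (f, e) in t:                                     # M-step
            t[(f, e)] = max(count[(f, e)] / total[e], FLOOR)
    return t

def perplexity(bitext, t):
    """Per-word perplexity of the French side of held-out data."""
    logp, n = 0.0, 0
    for e_sent, f_sent in bitext:
        for f in f_sent:
            p = sum(t.get((f, e), FLOOR) for e in e_sent) / len(e_sent)
            logp += math.log2(p)
            n += 1
    return 2.0 ** (-logp / n)

def viterbi_align(e_sent, f_sent, t):
    """Most likely English position (0 = NULL) for each French word."""
    return [max(range(len(e_sent)), key=lambda i: t.get((f, e_sent[i]), FLOOR))
            for f in f_sent]

# Hypothetical usage; actual file names depend on what is in the data directory:
# train = read_bitext("/u/cs448/data/hw4/train.e", "/u/cs448/data/hw4/train.f")
# t = train_model1(train)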
You should floor all probabilities at a small value, say 1e-07, to avoid numerical problems as well as dead ends in the EM training. Similarly, you may need to prune low-valued parameters in order to keep memory usage and file sizes manageable.
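One possible way to apply both steps to a trained table, assuming the dictionary representation from the sketch above; the pruning threshold is a tunable choice, not part of the assignment spec.

FLOOR = 1e-7       # lower bound on any stored probability
PRUNE = 1e-4       # entries below this are dropped entirely (tunable)

def floor_and_prune(t):
    """Drop tiny t(f|e) entries and floor the survivors.

    Pruning shrinks memory use and output-file size; flooring keeps EM and
    perplexity computations away from zeros.  Lookups of pruned pairs should
    fall back to FLOOR, e.g. t.get((f, e), FLOOR)."""
    return {pair: max(p, FLOOR) for pair, p in t.items() if p >= PRUNE}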
Please turn in:
This is a big data set, and training is time- and memory-intensive.