There is no required text; instead, reading is assigned for each class, as shown in the schedule below:
On | we will cover | which means that after class you will understand | if before class you have read |
---|---|---|---|
1/19 | Probability Theory | independence, Bayes' rule | Wasserman ch 1, 2, 3 |
1/24 | Information Theory | entropy, KL divergence, coding | MacKay ch 2 |
1/26 | Probabilistic Inference | priors: Bayesian reasoning, MAP | Heckerman |
1/31 | Probabilistic Inference | priors on continuous variables | MacKay ch 24 |
2/2 | Minimum Description Length | decision trees | Mitchell |
2/7 | Probabilistic Inference | polytrees | MacKay ch 26 |
2/9 | Expectation Maximization | latent variable clustering | Bilmes §1-3 |
2/14 | Independent Component Analysis | source separation | MacKay ch 34 |
2/16 | Learning Theory | probably approximately correct | Kearns & Vazirani ch 1 |
2/21 | Learning Theory | VC dimension | Kearns & Vazirani ch 2, 3 |
2/23 | Eigenvectors | least squares, PCA | Bishop pp. 310-314, Appendix E |
2/28 | Nonlinear Dimensionality Reduction | Isomap, locally linear embedding | Roweis; Tenenbaum |
3/2 | Optimization | conjugate gradient | Bishop pp. 274-282 |
3/7 | Monte Carlo Methods | Gibbs sampling, MCMC | MacKay ch 29 |
3/9 | Review | | |
3/21 | Midterm | | |
3/23 | Midterm Solutions | | |
3/28 | MCMC, Gibbs | (continued from before the midterm) | |
3/30 | Perceptron, Backpropagation | the chain rule | MacKay ch 38, 39; Bishop pp. 140-148 |
4/4 | Support Vectors | the Wolfe dual | Burges §3-4 |
4/6 | Support Vectors | the kernel trick | |
4/11 | Hidden Markov Models | forward-backward | Bilmes §4 |
4/13 | Reinforcement Learning | Q-learning | Ballard ch 11 |
4/18 | Reinforcement Learning | partial observability | Ballard ch 11 |
4/20 | Games | Nash equilibrium | Morris pp. 115-131 |
4/25 | Games | learning to co-operate | |
4/27 | Something Fun | | |
5/2 | Review | come to class with questions! | |