Computer Science 446
Machine Learning
Spring 2021
Instructor: Dan Gildea (office hours M/W 1-2pm)
TAs:
Prereqs: Probability, Linear Algebra, Vector Calculus.
Homeworks
Lecture notes
Required text: Christopher M. Bishop, Pattern Recognition and Machine Learning.
The following are useful references in addition to the reading material assigned for each class:
- Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach.
- Trevor Hastie, Robert Tibshirani, Jerome Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction.
- Larry Wasserman, All of Statistics, 2004.
Syllabus
On | we will cover | which means that after class you will understand | if before class you have read
2/2 | Regression and Classification | constrained optimization, perceptron (see sketch below) | Bishop 1.2, 1.4, App. E
2/4 | Logistic Regression | stochastic gradient descent (see sketch below) | Bishop 4.3
2/9 | Backpropagation | DP for gradient descent | Bishop 5.1, 5.2, 5.3
2/11 | Deep Learning | drop-out | Krizhevsky 2012
2/16 | Support Vectors | max-margin | Bishop 3.1, 4.1
2/18 | Support Vectors | strong duality | Bishop 7.1
2/23 | Support Vectors | the kernel trick | Bishop 6.1, 6.2
2/25 | Hidden Markov Models | forward-backward | Bishop 13.2
3/2 | Graphical Models | Bayes ball |
3/4 | Probabilistic Inference | message passing | Bishop 8.4
3/9 | Tree decomposition | cyclic graphs | Koller and Friedman
3/11 | Tree decomposition cont'd | vertex elimination |
3/16 | Review | |
3/18 | Midterm | |
3/23 | Midterm Solutions | |
3/25 | Fairness in machine learning | | Kleinberg, Corbett-Davies
3/30 | no class | |
4/1 | Expectation Maximization | L = Q + H + D | Bishop 9
4/6 | Expectation Maximization | mixture of Gaussians (see sketch below) | Bishop 9
4/8 | EM for HMM | minimum Bayes risk |
4/13 | Sampling | Markov Chain Monte Carlo | Bishop 11.2
4/15 | Metropolis-Hastings | detailed balance (see sketch below) | Bishop 11.2
4/20 | Gibbs sampling | annealing | Bishop 11.3
4/22 | Learning Theory | PAC | Kearns and Vazirani
4/27 | Gradient Descent | SGD convergence | Ruder 2016
4/29 | GAN/VAE | deep generative models | Goodfellow et al. 2014, Kingma and Welling 2014
5/4 | Reinforcement Learning | Q-learning (see sketch below) | Sutton ch. 3, 4.3, 4.4, 6.1, 6.5, 7.2, 11.1
5/6 | Review | come to class with questions! |
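Code sketches

The following are minimal NumPy sketches of a few algorithms from the schedule above. They are illustrations written for this page, not course-provided code; variable names, default parameters, and input conventions are all assumptions.

First, the perceptron (2/2): the mistake-driven update w <- w + y*x, assuming labels in {-1, +1}.

    import numpy as np

    def perceptron(X, y, epochs=100):
        # X: (n, d) feature array; y: labels in {-1, +1} (assumed convention)
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            mistakes = 0
            for xi, yi in zip(X, y):
                if yi * (w @ xi + b) <= 0:   # misclassified (or on the boundary)
                    w += yi * xi             # mistake-driven update
                    b += yi
                    mistakes += 1
            if mistakes == 0:                # a full pass with no mistakes: done
                break
        return w, b

If the data are linearly separable the loop terminates early; otherwise it simply stops after the epoch budget.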
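Stochastic gradient descent for logistic regression (2/4): one-example-at-a-time updates on the negative log-likelihood, assuming labels in {0, 1}. The learning rate and epoch count are placeholder choices.

    import numpy as np

    def logistic_sgd(X, y, lr=0.1, epochs=50, seed=0):
        # X: (n, d) feature array; y: labels in {0, 1}
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for i in rng.permutation(len(y)):        # visit examples in random order
                p = 1.0 / (1.0 + np.exp(-X[i] @ w))  # sigmoid(w . x_i)
                w -= lr * (p - y[i]) * X[i]          # per-example gradient of the NLL
        return w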
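EM for a mixture of Gaussians (4/6), in one dimension to keep the sketch short: alternate E-step responsibilities with closed-form M-step updates. Initializing the means with k random data points is one arbitrary choice among many.

    import numpy as np

    def em_gmm_1d(x, k=2, iters=100, seed=0):
        # x: (n,) array of scalar observations
        rng = np.random.default_rng(seed)
        pi = np.full(k, 1.0 / k)                   # mixing weights
        mu = rng.choice(x, size=k, replace=False)  # initial means: k random points
        var = np.full(k, np.var(x))                # initial variances
        for _ in range(iters):
            # E-step: r[n, j] proportional to pi_j * N(x_n | mu_j, var_j)
            log_r = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
                     - 0.5 * (x[:, None] - mu) ** 2 / var)
            r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
            r /= r.sum(axis=1, keepdims=True)
            # M-step: weighted maximum-likelihood updates
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return pi, mu, var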
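Random-walk Metropolis (4/15): with a symmetric Gaussian proposal the Hastings correction cancels, so accepting with probability min(1, p(x')/p(x)) suffices; this rule enforces detailed balance, which makes the target the chain's stationary distribution. Here log_p is any function returning the target log-density up to an additive constant.

    import numpy as np

    def metropolis(log_p, x0, n_samples, step=0.5, seed=0):
        rng = np.random.default_rng(seed)
        x = x0
        samples = []
        for _ in range(n_samples):
            x_new = x + step * rng.standard_normal()    # symmetric proposal
            # accept with probability min(1, p(x')/p(x))
            if rng.random() < np.exp(min(0.0, log_p(x_new) - log_p(x))):
                x = x_new
            samples.append(x)                           # rejected moves repeat x
        return np.array(samples)

For example, metropolis(lambda x: -0.5 * x**2, 0.0, 10000) draws (correlated) samples from a standard normal.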
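Tabular Q-learning (5/4) with an epsilon-greedy policy. The environment interface here (reset() returning a state, step(a) returning (state, reward, done)) is an assumed gym-style convention, not anything specified by the course.

    import numpy as np

    def q_learning(env, n_states, n_actions, episodes=500,
                   alpha=0.1, gamma=0.99, eps=0.1, seed=0):
        rng = np.random.default_rng(seed)
        Q = np.zeros((n_states, n_actions))
        for _ in range(episodes):
            s = env.reset()                      # assumed: reset() -> initial state
            done = False
            while not done:
                # epsilon-greedy action selection
                if rng.random() < eps:
                    a = rng.integers(n_actions)
                else:
                    a = int(np.argmax(Q[s]))
                s2, r, done = env.step(a)        # assumed: step(a) -> (state, reward, done)
                target = r if done else r + gamma * np.max(Q[s2])
                Q[s, a] += alpha * (target - Q[s, a])   # TD update toward bootstrapped target
                s = s2
        return Q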
Grading
- Homeworks: 50%
- Final exam: 30%
- Midterm: 20%
gildea @ cs rochester edu
April 27, 2021