The language of probability allows us to account for uncertainty coherently and automatically. This course will teach you how to build probabilistic models, fit them to data, and perform inference in them. These models let us generate novel images and text, find meaningful latent representations of data, take advantage of large unlabeled datasets, and even perform analogical reasoning automatically. The course covers the basic building blocks of these models and the computational tools needed to use them. The class has a major project component.
January 10: Introduction
January 12: Tutorial: Basic supervised learning and probability
January 17: Basic Probabilistic Generative and Discriminative models
Reading: Chapter 3 of David MacKay's textbook, Information Theory, Inference, and Learning Algorithms
Code examples:
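A minimal sketch of the generative/discriminative distinction (my own illustration, not course-provided code): a generative classifier models p(x | y) with class-conditional Gaussians and classifies via Bayes' rule, while a discriminative classifier models p(y | x) directly with logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(-2.0, 1.0, 100)   # class 0 samples
x1 = rng.normal(+2.0, 1.0, 100)   # class 1 samples
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Generative: fit p(x | y) as Gaussians with a shared variance,
# then classify with Bayes' rule (equal class priors assumed).
mu0, mu1 = x0.mean(), x1.mean()
var = np.concatenate([x0 - mu0, x1 - mu1]).var()

def gen_predict(xs):
    log_ratio = ((xs - mu0) ** 2 - (xs - mu1) ** 2) / (2 * var)
    return (log_ratio > 0).astype(float)

# Discriminative: fit p(y | x) directly by gradient descent
# on the logistic-regression log-likelihood.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

def disc_predict(xs):
    return (w * xs + b > 0).astype(float)
```

Both routes give a decision boundary near zero on this toy data; they differ in what they model, which matters once data is scarce or partly unlabeled.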
January 19: Tutorial: Stochastic optimization
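The core idea of stochastic optimization can be sketched in a few lines (my own illustration, not the tutorial's code): follow noisy gradients computed on random minibatches rather than the full dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(1000, 2))
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(2)
lr = 0.05
for step in range(2000):
    idx = rng.integers(0, len(X), size=32)       # sample a random minibatch
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)   # minibatch gradient of MSE
    w -= lr * grad                               # noisy gradient step
```

Each step is cheap (32 examples instead of 1000), and the gradient noise averages out, so `w` converges to a neighborhood of `true_w`.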
January 24: Directed Graphical Models
January 26: Tutorial: Automatic differentiation (autodiff demo slides; implementation slides)
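The essence of reverse-mode autodiff fits in a tiny sketch (illustrative only, not the demo code): each intermediate value records its parents and local derivatives, and `backward()` accumulates gradients by the chain rule.

```python
class Var:
    """A scalar that tracks how it was computed, for reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        # local derivatives of (a + b) are 1 w.r.t. each input
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # local derivatives of (a * b) are b and a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, upstream=1.0):
        # accumulate the upstream gradient, then pass it to parents
        # scaled by each local derivative (chain rule)
        self.grad += upstream
        for parent, local in self.parents:
            parent.backward(upstream * local)

x = Var(3.0)
y = Var(4.0)
z = x * y + x * x     # z = xy + x^2, so dz/dx = y + 2x = 10, dz/dy = x = 3
z.backward()
```

Real implementations traverse the graph in topological order instead of recursing (this naive version revisits shared subgraphs), but the gradients it computes here are exact.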
January 31: Undirected Graphical Models
February 2: Tutorial: Markov Random Fields
February 7: Exact Inference
February 9: Tutorial: Junction-tree algorithm (notes; slides)
February 10: Assignment 1 due, submitted through Markus.
February 14: Variational Inference
February 16: Midterm exam
Things to know for midterm:
February 18 to 26: Reading week, no classes
February 28: Sampling and Monte Carlo methods
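The basic Monte Carlo idea in one sketch (my own, not lecture code): approximate an expectation by an average over samples. Here E[x²] for x ~ N(0, 1), whose true value is 1.

```python
import numpy as np

rng = np.random.default_rng(3)
samples = rng.normal(size=100_000)   # draws from N(0, 1)
estimate = np.mean(samples ** 2)     # Monte Carlo estimate of E[x^2] = 1
```

The error shrinks like 1/sqrt(N) regardless of dimension, which is why sampling methods scale to problems where exact integration is hopeless.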
March 1: Tutorial: Gradient-based MCMC
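A minimal sketch of gradient-based MCMC (not the tutorial's code): unadjusted Langevin dynamics, which nudges the chain uphill on log p(x) and adds noise. The target here is a standard normal, so grad log p(x) = -x.

```python
import numpy as np

rng = np.random.default_rng(4)
eps = 0.1
x = 0.0
samples = []
for t in range(60_000):
    grad_logp = -x                                        # gradient of log N(0, 1)
    x = x + 0.5 * eps * grad_logp + np.sqrt(eps) * rng.normal()
    if t >= 10_000:                                       # discard burn-in
        samples.append(x)
samples = np.array(samples)
```

With a small step size the chain's samples match the target's moments; adding a Metropolis accept/reject step (MALA) removes the discretization bias entirely.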
Assignment 2 due March 12, submitted through Markus
Some Python and Numpy resources, from Roger Grosse's neural networks course:
March 7: Sequential data and time-series models
March 9: Tutorial: REINFORCE and differentiating through discrete variables
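A minimal sketch of the REINFORCE (score-function) estimator (illustrative, not the tutorial's code): it rewrites the gradient of an expectation over a discrete variable as an expectation of f(b) times the score, grad_theta log p(b | theta), which we can estimate by sampling.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.3
f = lambda b: b                                   # f(b) = b, so E[f] = theta and
                                                  # the true gradient w.r.t. theta is 1
b = (rng.random(200_000) < theta).astype(float)   # b ~ Bernoulli(theta)
score = b / theta - (1 - b) / (1 - theta)         # d/dtheta log p(b | theta)
grad_est = np.mean(f(b) * score)                  # REINFORCE estimate
```

No gradient of f is needed and b stays discrete; the price is variance, which is why the tutorial's practical versions add baselines or control variates.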
March 13: Last day to drop course.
March 14: Stochastic Variational Inference
March 14: Tutorial: Practicalities of SVI
March 21: Variational Autoencoders
March 29: Gaussian processes
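GP regression has a closed-form posterior, which a short sketch can show (my own, assuming an RBF kernel and noise-free targets with a small jitter term):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

X = np.array([-2.0, 0.0, 2.0])          # training inputs
y = np.sin(X)                           # training targets
Xs = np.array([0.0, 1.0])               # test inputs

K = rbf(X, X) + 1e-8 * np.eye(len(X))   # kernel matrix + jitter for stability
Ks = rbf(Xs, X)                         # cross-covariances, test vs. train
mean = Ks @ np.linalg.solve(K, y)       # posterior mean of f at Xs
```

With (near) noise-free observations the posterior mean interpolates the training points exactly; the posterior covariance, K(Xs, Xs) - Ks K⁻¹ Ksᵀ, quantifies uncertainty away from them.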
March 31: Tutorial: Bayesian Optimization
April 4: Assignment 3 due, submitted through Markus
April 4: Generative Adversarial Networks
April 13: Project due, submitted through Markus