Source: CycleGAN. You will implement this model for Assignment 4.
Machine learning is a powerful set of techniques that allow computers to learn from data rather than having a human expert program a behavior by hand. Neural networks are a class of machine learning algorithms originally inspired by the brain, but which have recently seen a lot of success in practical applications. They're at the heart of production systems at companies like Google and Facebook for face recognition, speech-to-text, and language understanding.
This course gives an overview of both the foundational ideas and the recent advances in neural net algorithms. Roughly the first 2/3 of the course focuses on supervised learning -- training the network to produce a specified behavior when one has lots of labeled examples of that behavior. The last 1/3 focuses on unsupervised learning and reinforcement learning.
All course-related announcements will be sent to the class mailing list, csc321h1s [at] teach.cs.toronto.edu.
We will use Piazza for the course forum. Details to follow.
If you want to contact the course staff privately, please e-mail csc321staff [at] cs.toronto.edu (to reach the TAs and the instructor) or rgrosse [at] cs.toronto.edu (to reach the instructor only).
All assignment deadlines are at 11:59pm on the date listed. Please see the course information handout for detailed policies (marking, lateness, etc.).
Afternoon: 1/4, 1-2pm; Night: 1/9, 6-7pm
What are machine learning and neural networks, and what would you use them for? Supervised, unsupervised, and reinforcement learning. How this course is organized.
Afternoon: 1/9, 1-2pm; Night: 1/9, 7-8pm
Linear regression, a supervised learning task where you want to predict a scalar-valued target. Formulating it as an optimization problem, and solving it either directly or with gradient descent. Vectorization. Feature maps and polynomial regression. Generalization: overfitting, underfitting, and validation.
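To make this concrete, here is a minimal NumPy sketch (not course code; the toy data and variable names are made up) showing both the direct solution and vectorized gradient descent:

```python
import numpy as np

# Toy data: 100 examples, 3 features (illustrative only).
np.random.seed(0)
X = np.random.randn(100, 3)
true_w = np.array([2.0, -1.0, 0.5])
t = X.dot(true_w) + 0.1 * np.random.randn(100)

# Direct solution: solve the normal equations (X^T X) w = X^T t.
w_direct = np.linalg.solve(X.T.dot(X), X.T.dot(t))

# Gradient descent on the mean squared error, fully vectorized.
w = np.zeros(3)
alpha = 0.1  # learning rate
for _ in range(500):
    y = X.dot(w)                    # predictions for all examples at once
    grad = X.T.dot(y - t) / len(t)  # gradient of the cost
    w = w - alpha * grad

print(w_direct)
print(w)  # should be close to w_direct
```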
Afternoon: 1/11, 1-2pm; Night: 1/16, 6-7pm
Binary linear classification. Visualizing linear classifiers. The perceptron algorithm. Limits of linear classifiers.
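As a sketch of the perceptron algorithm (toy data invented for illustration; the bias is folded in as a constant input of 1):

```python
import numpy as np

# Linearly separable toy data with targets in {+1, -1}.
X = np.array([[ 1.0,  2.0, 1.0],
              [ 2.0,  1.0, 1.0],
              [-1.0, -2.0, 1.0],
              [-2.0, -1.0, 1.0]])
t = np.array([1, 1, -1, -1])

w = np.zeros(3)
for epoch in range(10):
    for x_i, t_i in zip(X, t):
        if t_i * np.dot(w, x_i) <= 0:  # example is misclassified
            w = w + t_i * x_i          # perceptron update rule
print(w)
```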
Afternoon: 1/16, 1-2pm; Night: 1/16, 7-8pm
Comparison of loss functions for binary classification. Cross-entropy loss, logistic activation function, and logistic regression. Hinge loss. Multiway classification. Convex loss functions. Gradient checking. (Note: this is really a lecture-and-a-half, and will run into what's scheduled as Lecture 5.)
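A minimal sketch of logistic regression with cross-entropy loss, plus a finite-difference gradient check (an assumed setup for illustration, not the assignment code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(w, X, t):
    # Average cross-entropy for targets t in {0, 1}.
    y = sigmoid(X.dot(w))
    return -np.mean(t * np.log(y) + (1 - t) * np.log(1 - y))

def grad(w, X, t):
    y = sigmoid(X.dot(w))
    return X.T.dot(y - t) / len(t)

# Gradient checking: compare against centered finite differences.
np.random.seed(0)
X = np.random.randn(20, 4)
t = (np.random.rand(20) > 0.5).astype(float)
w = np.random.randn(4)
eps = 1e-6
for i in range(len(w)):
    e = np.zeros(len(w))
    e[i] = eps
    numeric = (cost(w + e, X, t) - cost(w - e, X, t)) / (2 * eps)
    diff = abs(numeric - grad(w, X, t)[i])
    print(diff)  # should be very small if the gradient is correct
```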
Afternoon: 1/18, 1-2pm; Night: 1/23, 6-7pm
Multilayer perceptrons. Comparison of activation functions. Viewing deep neural nets as function composition and as feature learning. Limitations of linear networks and universality of nonlinear networks.
Suggested reading: Deep Learning Book, Sections 6.1-6.4
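As a picture of function composition, here is the forward pass of a small MLP (random weights rather than trained ones; the shapes are arbitrary):

```python
import numpy as np

# y = f3(f2(f1(x))): each layer transforms the previous layer's features.
def relu(z):
    return np.maximum(0.0, z)

np.random.seed(0)
x = np.random.randn(5)
W1, b1 = np.random.randn(8, 5), np.zeros(8)
W2, b2 = np.random.randn(8, 8), np.zeros(8)
W3, b3 = np.random.randn(1, 8), np.zeros(1)

h1 = relu(W1.dot(x) + b1)   # first layer of learned features
h2 = relu(W2.dot(h1) + b2)  # second layer of learned features
y = W3.dot(h2) + b3         # linear output layer
```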
Afternoon: 1/23, 1-2pm; Night: 1/23, 7-8pm
The backpropagation algorithm, the method for computing gradients that we use throughout the course.
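As a sketch of the idea on a tiny two-layer net with squared-error loss (random weights for illustration; the `_bar` variables hold the error signals):

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(3)        # input
t = 1.0                       # target
W1 = np.random.randn(4, 3)    # hidden weights
w2 = np.random.randn(4)       # output weights

# Forward pass.
z = W1.dot(x)                 # hidden pre-activations
h = 1.0 / (1.0 + np.exp(-z))  # logistic hidden activations
y = w2.dot(h)                 # output
loss = 0.5 * (y - t) ** 2

# Backward pass: the chain rule, applied from the loss backwards.
y_bar = y - t                 # dloss/dy
w2_bar = y_bar * h            # dloss/dw2
h_bar = y_bar * w2            # dloss/dh
z_bar = h_bar * h * (1 - h)   # dloss/dz (derivative of the logistic)
W1_bar = np.outer(z_bar, x)   # dloss/dW1
```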
Afternoon: 1/25, 1-2pm; Night: 1/30, 6-7pm
Language modeling, n-gram models (a localist representation), neural language models (a distributed representation), and skip-grams (another distributed representation).
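A bigram model, the simplest n-gram, can be built just by counting; a toy sketch (the corpus is invented for illustration):

```python
from collections import defaultdict

# Estimate P(next word | current word) from bigram counts.
corpus = "the cat sat on the mat the cat ran".split()
counts = defaultdict(lambda: defaultdict(int))
for w1, w2 in zip(corpus[:-1], corpus[1:]):
    counts[w1][w2] += 1

def prob(w2, w1):
    total = float(sum(counts[w1].values()))
    return counts[w1][w2] / total if total else 0.0

print(prob("cat", "the"))  # 2/3: "the" is followed by "cat" twice, "mat" once
```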
Afternoon: 1/30, 1-2pm; Night: 1/30, 7-8pm
How to use the gradients computed by backprop. Features of optimization landscapes: local optima, saddle points, plateaux, ravines. Stochastic gradient descent and momentum.
Suggested reading: Deep Learning Book, Chapter 8
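For instance, momentum keeps a running velocity so that progress along a ravine's gentle direction accumulates; a toy sketch on a quadratic (the constants are chosen for illustration):

```python
import numpy as np

# A "ravine": much higher curvature in one direction than the other.
def grad(w):
    return np.array([10.0 * w[0], 0.1 * w[1]])

w = np.array([1.0, 1.0])
v = np.zeros(2)         # velocity
alpha, mu = 0.05, 0.9   # learning rate and momentum parameter
for _ in range(200):
    v = mu * v - alpha * grad(w)  # accumulate a running velocity
    w = w + v
print(w)  # approaches the minimum at the origin
```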
Afternoon: 2/1, 1-2pm; Night: 2/6, 6-7pm
Bias/variance decomposition, data augmentation, limiting capacity, early stopping, weight decay, ensembles, stochastic regularization, hyperparameter tuning.
Suggested reading: Deep Learning Book, Chapter 7
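As one example, weight decay adds an L2 penalty (lambda/2)*||w||^2 to the cost, which simply adds lambda*w to the gradient; a sketch for linear regression (toy data, illustrative lambda):

```python
import numpy as np

def grad_decay(w, X, t, lam):
    y = X.dot(w)
    return X.T.dot(y - t) / len(t) + lam * w  # data gradient + penalty term

np.random.seed(0)
X = np.random.randn(50, 5)
t = np.random.randn(50)
w = np.zeros(5)
for _ in range(300):
    w = w - 0.1 * grad_decay(w, X, t, lam=0.01)
print(w)  # shrunk slightly toward zero relative to the unpenalized fit
```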
Lecture 10: Automatic Differentiation [Slides]
Afternoon: 2/6, 1-2pm; Night: 2/6, 7-8pm
Afternoon: 2/8, 1-2pm; Night: 2/13, 6-7pm
Convolution operation. Convolution layers and pooling layers. Equivariance and invariance. Backprop rules for conv nets.
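A naive sketch of the 2-D convolution operation (strictly speaking cross-correlation, the convention most deep learning code uses), with "valid" boundaries:

```python
import numpy as np

def conv2d(image, kernel):
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with one image patch.
            out[i, j] = np.sum(image[i:i+kH, j:j+kW] * kernel)
    return out

image = np.random.randn(6, 6)
sobel_x = np.array([[1., 0., -1.],
                    [2., 0., -2.],
                    [1., 0., -1.]])   # a horizontal edge detector
print(conv2d(image, sobel_x).shape)   # (4, 4)
```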
Afternoon: 2/13, 1-2pm; Night: 2/13, 7-8pm
Conv net architectures applied to handwritten digit and object classification. Measuring the size of a conv net.
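One way to measure size is to count parameters; for a convolution layer, each output filter has kH * kW * in_channels weights plus a bias (the layer dimensions below are hypothetical):

```python
def conv_params(kH, kW, in_channels, out_channels):
    # Each of the out_channels filters: kH*kW*in_channels weights + 1 bias.
    return (kH * kW * in_channels + 1) * out_channels

print(conv_params(3, 3, 64, 128))  # 73856
```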
Lecture 13: Catch-Up
Afternoon: 2/15, 1-2pm; Night: 2/27, 6-7pm
There is no Lecture 13 because we're superstitious. Also, we've fallen roughly a full lecture behind schedule, so this will sync up the schedule with what's actually covered.
Afternoon: 2/27, 1-2pm; Night: 2/27, 7-8pm
Interesting things you can do with gradient descent on the inputs: conv net visualizations, adversarial inputs, Deep Dream.
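All of these share one trick: hold the trained weights fixed and do gradient ascent on the input itself. A bare-bones sketch with a stand-in linear "model" (everything here is invented for illustration):

```python
import numpy as np

np.random.seed(0)
W = np.random.randn(1, 5)   # stand-in for fixed, trained weights

def score(x):
    return W.dot(x)[0]

def score_grad(x):
    return W[0]             # d(score)/dx for this linear stand-in

x = np.zeros(5)
for _ in range(100):
    x = x + 0.1 * score_grad(x)  # nudge the *input* to raise the score
```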
Afternoon: 3/1, 1-2pm; Night: 3/6, 7:30-8:30pm
Recurrent neural nets. Backprop through time. Applying RNNs to language modeling and machine translation.
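The forward pass of a vanilla RNN reuses the same weights at every time step; training unrolls this loop and backpropagates through it. A minimal sketch (random weights, made-up dimensions):

```python
import numpy as np

np.random.seed(0)
T, D, H = 4, 3, 5             # sequence length, input size, hidden size
xs = np.random.randn(T, D)    # one input vector per time step
Wxh = np.random.randn(H, D)   # input-to-hidden weights
Whh = np.random.randn(H, H)   # hidden-to-hidden weights

h = np.zeros(H)
for step in range(T):
    # Same Wxh and Whh at every step; h carries information forward.
    h = np.tanh(Wxh.dot(xs[step]) + Whh.dot(h))
```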
Afternoon: 3/8, 1-2pm; Night: 3/13, 6-7pm
Why RNN gradients explode and vanish, both in terms of the mechanics of backprop, and conceptually in terms of the function the RNN computes. Ways to deal with it: gradient clipping, input reversal, LSTM.
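Gradient clipping is the simplest of these fixes: if the gradient's norm exceeds a threshold, rescale it to the threshold before updating. A sketch (the threshold is chosen arbitrarily):

```python
import numpy as np

def clip_gradient(grad, threshold):
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)  # keep direction, cap the length
    return grad

g = np.array([30.0, 40.0])    # norm 50
print(clip_gradient(g, 5.0))  # same direction, rescaled to norm 5
```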
Afternoon: 3/13, 1-2pm; Night: 3/13, 7-8pm
Deep Residual Networks. Attention-based models for machine translation and caption generation.
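The core idea of a residual block is that a layer computes a change to its input, y = x + f(x), so the identity mapping is trivially easy to represent. A sketch (random weights, arbitrary shapes):

```python
import numpy as np

def residual_block(x, W1, W2):
    # x plus a learned perturbation f(x); here f is a one-hidden-layer ReLU net.
    return x + W2.dot(np.maximum(0.0, W1.dot(x)))

np.random.seed(0)
x = np.random.randn(4)
W1 = np.random.randn(8, 4)
W2 = np.random.randn(4, 8)
y = residual_block(x, W1, W2)
```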
Afternoon: 3/15, 1-2pm; Night: 3/20, 6-7pm
Maximum likelihood estimation. Optional: basics of Bayesian parameter estimation and maximum a posteriori estimation.
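For a univariate Gaussian, maximum likelihood has a closed form: the MLE of the mean is the sample mean, and of the variance is the (biased) sample variance. A quick check on simulated data:

```python
import numpy as np

np.random.seed(0)
x = np.random.normal(loc=2.0, scale=1.5, size=1000)

mu_mle = np.mean(x)
sigma2_mle = np.mean((x - mu_mle) ** 2)  # note: divides by N, not N - 1
print(mu_mle)      # close to 2.0
print(sigma2_mle)  # close to 1.5**2 = 2.25
```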
Afternoon: 3/20, 1-2pm; Night: 3/20, 7-8pm
Afternoon: 3/22, 1-2pm; Night: 3/27, 6-7pm
Lecture 21: Policy Gradient [Slides]
Afternoon: 3/27, 1-2pm; Night: 3/27, 7-8pm
Lecture 22: Q-Learning [Slides]
Afternoon: 3/29, 1-2pm; Night: 4/3, 6-7pm
Lecture 23: Go [Slides]
Afternoon: 4/3, 1-2pm; Night: 4/3, 7-8pm
Afternoon: 1/11, 2-3pm; Night: 1/9, 8-9pm
Afternoon: 1/18, 2-3pm; Night: 1/16, 8-9pm
Afternoon: 1/25, 2-3pm; Night: 1/23, 8-9pm
Tutorial 4: Autograd [IPython Notebook]
Afternoon: 2/1, 2-3pm; Night: 1/30, 8-9pm
Afternoon: 2/8, 2-3pm; Night: 2/6, 8-9pm
Afternoon: 2/15, 2-3pm; Night: 2/13, 8-9pm
Tutorial 7: Midterm Review
Afternoon: 3/1, 2-3pm; Night: 2/27, 8-9pm
This tutorial will effectively be extra office hours. But if there are recurring questions, or past exam solutions you'd like us to go over, we can discuss those as a class.
Afternoon: 3/15, 2-3pm; Night: 3/13, 8-9pm
Afternoon: 3/22, 2-3pm; Night: 3/20, 8-9pm
Afternoon: 3/29, 2-3pm; Night: 3/27, 8-9pm
The programming assignments will all be done in Python using the NumPy scientific computing library, but prior knowledge of Python is not required. Basic Python will be taught in a tutorial. We will be using Python 2, not Python 3, since this is the version more commonly used in machine learning.
You have several options for how to use Python:
Once Python is installed, there are two ways you can edit and run Python code:
Here are some recommended background readings on Python and NumPy.