CSC2547 / STA4273: Topics in Statistical Learning Theory

Murat A. Erdogdu, University of Toronto, Winter 2019


Time and location

Class meets on Thursdays, 2-4 pm, in AB 107 (50 St. George St.)


Instructor

Murat A. Erdogdu
Office hours: Monday 12pm-1pm, 100 St. George St. #5016b, Email: erdogdu at cs.toronto dot edu

When contacting the course staff, please use “Csc2547” as your email subject line.

Teaching Assistants



Prerequisites

This class requires a good working knowledge of probability theory, linear algebra, and real analysis (at least at the Master's level). Homework 0 is a good way to check your background.


Textbook

There is no required textbook for the course. The following materials can be helpful.


Evaluation

Your grade will be determined by three homework assignments (60%) and a final project (40%).


Homework Policy

Assignments posted on the class website are due in class at the start of Thursday's lecture. If you are traveling, you may email your solution to one of the course staff before the deadline. Ten percent of the homework's value will be deducted for each day it is late. Exceptions will be made for documented emergencies. No credit will be given for homework submitted after solutions have been posted.

After attempting the problems on an individual basis, you may discuss a homework assignment with up to two classmates. However, you must write your own code, write up your own solutions individually, and explicitly name any collaborators at the top of the homework.

Here is a LaTeX template for lectures, assignments, and project reports. Note that you are not required to submit solutions typeset in LaTeX for the assignments, but it would be nice if you did.

Final Project

See the final project page.

Course Overview

  • How fast will your algorithm converge?

  • How much data do you need to get good prediction results?

  • What is the performance of your algorithm on test data?

In this course, we will try to answer questions like the above using rigorous math.

Course Topics (depending on time and interest):

  • Gaussian mean estimation

  • Asymptotics

  • Concentration inequalities

  • Uniform convergence

  • Optimization in ML

  • Online learning

  • Kernel methods
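To give a flavor of the first few topics, here is a short simulation sketch (not course material; all parameter values are illustrative assumptions). It estimates the mean of a Gaussian from samples and compares the empirical tail probability of the sample mean against the sub-Gaussian concentration bound P(|x̄ − μ| > t) ≤ 2 exp(−nt²/(2σ²)).

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 1.0, 1.0   # true mean and standard deviation (assumed values)
n = 1000               # samples per experiment
trials = 5000          # number of repeated experiments
t = 0.1                # deviation threshold

# Draw `trials` independent datasets and compute each sample mean.
samples = rng.normal(mu, sigma, size=(trials, n))
deviations = np.abs(samples.mean(axis=1) - mu)

# Empirical probability that the sample mean misses mu by more than t.
empirical = np.mean(deviations > t)

# Sub-Gaussian tail bound: P(|mean - mu| > t) <= 2 exp(-n t^2 / (2 sigma^2)).
bound = 2 * np.exp(-n * t**2 / (2 * sigma**2))

print(f"empirical tail probability: {empirical:.4f}")
print(f"sub-Gaussian bound:         {bound:.4f}")
```

With these parameters the bound evaluates to 2e⁻⁵ ≈ 0.0135, and the simulated tail probability should come in well below it, since the bound is loose for exact Gaussians.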