Overview

In the next few decades, millions of people of varying backgrounds and levels of technical expertise will need to interact effectively with robotic technologies on a daily basis. As such, people will need to modify the behavior of their robots without explicitly writing code, by providing only a small number of kinesthetic or visual demonstrations. At the same time, robots should try to infer and predict the human's intentions and internal objectives from past interactions, in order to provide assistance before it is explicitly requested. This graduate-level seminar course will examine some of the most important papers in imitation learning for robot control, placing particular emphasis on developments in the last 10 years. Its purpose is to familiarize students with the frontiers of this research area, to help them identify open problems, and to enable them to make a novel contribution.

Prerequisites

You need to be comfortable with: introductory machine learning concepts (such as from CSC411/ECE521 or equivalent), linear algebra, basic multivariable calculus, and introductory probability. You also need strong programming skills in Python. Note: if you don't meet all the prerequisites above, please contact the instructor by email. Optional, but recommended: experience with neural networks (such as from CSC321) and introductory-level familiarity with reinforcement learning and control.

Announcements

Dec 10, 2018: The course is now available for registration on ROSI/ACORN. We will be using the discussion board and the announcements section on Quercus.

Teaching Staff

Instructor
Florian Shkurti
x@cs.toronto.edu, x=florian
Office Hours: Tue 4-6pm, PT283E

Time and Location

When: Fridays, 1-3pm
Lecture room: AB 107, St. George Campus

Grading and Important Dates

  • Presentations (20%): Each student enrolled in the class will present at least one paper and will be graded on the clarity of the presentation, the depth of their understanding of the material, and how well they address questions from the audience. Students scheduled to present on a given week will give a practice presentation to the instructor on the Monday of that week, 5:30-7pm in PT283E, to ensure high-quality presentations.
  • Assignment (20%): The assignment is now posted here.
  • Project Proposal (10%): Due Feb 11 at 6pm. Students can take on projects in groups of 2-3 people. Tips for a good project proposal can be found here. Proposals should not be based only on papers covered in class by Feb 11; students are encouraged to look further ahead in the schedule and to start planning their project definition well before this deadline. Students who need help choosing or crystallizing a project idea should email the instructor. Project proposals that involve the use of real robot hardware should say so clearly, to allow the instructor sufficient time to arrange access.
  • Midterm Progress Report (10%): Due Mar 1 at 6pm. Tips and expectations for a good midterm progress report are here.
  • Project Presentation (10%): Due Mar 29 or Apr 5. This will be a short presentation, approximately 5-10 minutes, depending on the number of groups.
  • Project Report and Code (30%): Due Apr 10. Tips and expectations for a good final project report can be found here.

Course Description

This course will broadly cover the following areas:

  • Imitating the policies of demonstrators (people, expensive algorithms, optimal controllers)
  • Connections between imitation learning, optimal control, and reinforcement learning
  • Learning the cost functions that best explain a set of demonstrations
  • Shared autonomy between humans and robots for real-time control

Schedule

Each entry below lists the lecture number and date, the topics and readings, the presenters, and the accompanying materials.

Lecture 1 (Jan 11): Introduction
Motivation, logistics, rough description of the topics to be covered.

Imitation vs. Robust Behavioral Cloning
ALVINN: An autonomous land vehicle in a neural network
Visual path following on a manifold in unstructured three-dimensional terrain
End-to-end learning for self-driving cars
A machine learning approach to visual perception of forest trails for mobile robots
DAgger: A reduction of imitation learning and structured prediction to no-regret online learning
Learning monocular reactive UAV control in cluttered natural environments

Required Background Reading
An invitation to imitation

Optional Reading
A survey of robot learning from demonstration
ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst
Presenter: Florian
Materials: Quiz 0, Syllabus, Slides
(A minimal DAgger code sketch follows this entry.)
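To make the DAgger paper above concrete, here is a minimal sketch of the dataset-aggregation loop in Python. It is a sketch under assumed interfaces, not code from the paper: `env` (with `reset()` and a `step()` returning the next state and a done flag), `expert_action`, and a scikit-learn-style `learner` with `fit`/`predict` are all hypothetical placeholders.

```python
# Minimal DAgger sketch. `env`, `expert_action`, and the scikit-learn-style
# `learner` are hypothetical placeholders, not from any released code.
import numpy as np

def dagger(env, expert_action, learner, n_iters=10, horizon=100):
    states, actions = [], []

    # Iteration 0: roll out the expert to seed the dataset (behavioral cloning).
    s = env.reset()
    for _ in range(horizon):
        a = expert_action(s)
        states.append(s); actions.append(a)
        s, done = env.step(a)
        if done:
            s = env.reset()
    learner.fit(np.array(states), np.array(actions))

    # Subsequent iterations: roll out the *learner*, but label every visited
    # state with the expert's action, then retrain on the aggregated dataset.
    for _ in range(n_iters):
        s = env.reset()
        for _ in range(horizon):
            states.append(s)
            actions.append(expert_action(s))       # expert labels learner's states
            a = learner.predict(np.array([s]))[0]  # learner chooses the action
            s, done = env.step(a)
            if done:
                s = env.reset()
        learner.fit(np.array(states), np.array(actions))
    return learner
```

The point of the loop is that the expert labels states visited by the learner's own policy, which counters the compounding-error problem of plain behavioral cloning.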
Lecture 2 (Jan 18): Intro to Optimal Control and Model-Based Reinforcement Learning
Linear Quadratic Regulator and some examples
Iterative Linear Quadratic Regulator
Model Predictive Control

Required Background Reading
Ben Recht: An outsider's tour of RL (watch his ICML'18 tutorial, too)

Optional Reading
PILCO: Probabilistic inference for learning control
Deep reinforcement learning in a handful of trials using probabilistic dynamics models
Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids
End-to-end differentiable physics for learning and control
Synthesizing neural network controllers with probabilistic model based reinforcement learning
A survey on policy search algorithms for learning robot controllers in a handful of trials
Reinforcement learning in robotics: a survey
DeepMPC: Learning deep latent features for model predictive control
Learning latent dynamics for planning from pixels
Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees
Presenter: Florian
Materials: Slides
(A minimal LQR code sketch follows this entry.)
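As a companion to the Linear Quadratic Regulator topic above, here is a minimal sketch of the standard finite-horizon, discrete-time Riccati recursion. The double-integrator example and all cost weights are illustrative assumptions, not course-provided code.

```python
# Finite-horizon discrete-time LQR via backward Riccati recursion.
# Dynamics: x_{t+1} = A x_t + B u_t; cost: sum_t x'Qx + u'Ru, plus terminal x'Qf x.
import numpy as np

def lqr_gains(A, B, Q, R, Qf, T):
    P = Qf
    gains = []
    for _ in range(T):
        # K_t = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P = Q + A' P (A - B K)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains[t] applies at time t: u_t = -gains[t] @ x_t

# Illustrative example: a double integrator with time step dt = 0.1.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2); R = 0.1 * np.eye(1); Qf = 10.0 * np.eye(2)
Ks = lqr_gains(A, B, Q, R, Qf, T=50)

x = np.array([1.0, 0.0])  # start displaced; LQR drives the state to the origin
for K in Ks:
    u = -K @ x
    x = A @ x + B @ u
print(x)  # should be close to zero
```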
Lecture 3 (Jan 25): Query-Efficient Policy Imitation via Novel State Detection
Maximum mean discrepancy imitation learning
DropoutDAgger: A Bayesian approach to safe imitation learning
SHIV: Reducing supervisor burden in DAgger using support vectors
Query-efficient imitation learning for end-to-end autonomous driving

Required Background Reading
Dropout as a Bayesian approximation: representing model uncertainty in deep learning

Optional Reading
Dropout: A simple way to prevent neural networks from overfitting
What my deep model doesn't know
Weight uncertainty in neural networks
Presenters: Ruthrash, Chris, Rohan, Huan
Materials: Slides
(A minimal Monte Carlo dropout code sketch follows this entry.)
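The required reading above (dropout as a Bayesian approximation) is what lets these papers flag novel states. Below is a minimal Monte Carlo dropout sketch in PyTorch: keep dropout active at test time, take several stochastic forward passes, and treat their spread as epistemic uncertainty. The architecture and the query threshold are illustrative assumptions, not taken from any of the papers.

```python
# Monte Carlo dropout: use the spread of stochastic forward passes as an
# (epistemic) uncertainty estimate, e.g. to decide when an imitation policy
# should query the expert for a label in a novel state.
import torch
import torch.nn as nn

policy = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 2),  # 2-D action, chosen for illustration
)

def mc_dropout_action(policy, state, n_samples=30):
    policy.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([policy(state) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

state = torch.randn(1, 4)
mean_action, std_action = mc_dropout_action(policy, state)
# Hypothetical gating rule; the threshold is an illustrative assumption.
if std_action.max().item() > 0.5:
    print("uncertain state: query the expert")
else:
    print("confident: execute", mean_action)
```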
Lecture 4 (Feb 1): Imitation Learning Combined with Reinforcement Learning, Control, and Planning #1
More detailed discussion of Dropout and epistemic vs. aleatoric uncertainty (continued from last week)
AggreVaTe: Reinforcement and imitation learning via interactive no-regret learning
Agile off-road autonomous driving using end-to-end deep imitation learning
End-to-end driving via conditional imitation learning
Deep Q-learning from demonstrations

Optional Reading: Imitation from Cost-to-Go Queries
Deeply AggreVaTeD: Differentiable imitation learning for sequential prediction
Convergence of value aggregation for imitation learning
Truncated Horizon Policy Search: Combining reinforcement learning & imitation learning
Fast policy learning through imitation and reinforcement
Presenters: David, Brenna, Bryan, Wei, Renato, Paul
Materials: Slides
Lecture 5 (Feb 8): Imitation as Program Induction and Modular Decomposition of Demonstrations
Neural Task Programming: Learning to generalize across hierarchical tasks
TACO: Learning task decomposition via temporal alignment for control
Learning movement primitive libraries through probabilistic segmentation
Bayesian inference of temporal task specifications from demonstrations
Neural programmer-interpreters

Required Background Reading
The motion grammar: analysis of a linguistic method for robot control

Optional Reading
Action understanding as inverse planning
Incremental learning of subtasks from unsegmented demonstration
Inducing probabilistic context-free grammars for the sequencing of movement primitives
Neural Task Graphs: Generalizing to unseen tasks from a single video demonstration
Neural program synthesis from diverse demonstration videos
Automata guided reinforcement learning with demonstrations
A syntactic approach to robot imitation learning using probabilistic activity grammars
Robot learning from demonstration by constructing skill trees
Transition state clustering: Unsupervised surgical trajectory segmentation for robot learning
Learning to sequence movement primitives from demonstrations
Presenters: Zihang, Ilan, Angran, Zeqi
Materials: Slides
Lecture 6 (Feb 15): Inverse Reinforcement Learning #1
Maximum entropy inverse reinforcement learning
Active preference-based learning of reward functions
Large-scale cost function learning for path planning using deep inverse reinforcement learning
Direct loss minimization inverse optimal control

Optional Reading: Applications of IRL
Socially compliant mobile robot navigation via inverse reinforcement learning
Model-based probabilistic pursuit via inverse reinforcement learning
First-person activity forecasting with online inverse reinforcement learning
Learning strategies in table tennis using inverse reinforcement learning
Planning-based prediction for pedestrians
Activity forecasting
Presenters: Sergio, Jacky, Sean, Siva
Materials: Slides
(A minimal maximum-entropy IRL code sketch follows this entry.)
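For orientation on the maximum entropy IRL paper above, here is a tabular sketch of its inner loop: soft value iteration yields the max-ent policy for the current reward, state visitation frequencies yield expected feature counts, and gradient ascent matches those to the expert's empirical feature counts. The inputs (`P[a]` as per-action transition matrices, per-state `features`, and demonstrations as lists of state indices) are hypothetical, it assumes demonstration lengths comparable to the horizon, and real implementations differ in many details.

```python
# Tabular maximum-entropy IRL sketch (in the spirit of Ziebart et al., 2008).
# P[a][s, s'] = p(s' | s, a); features[s] is the feature vector of state s.
import numpy as np

def maxent_irl(P, features, trajs, gamma=0.99, horizon=50, lr=0.1, iters=100):
    n_states, n_feats = features.shape
    n_actions = len(P)

    # Empirical (expert) feature expectations, averaged over demonstrations.
    f_expert = np.mean([features[traj].sum(axis=0) for traj in trajs], axis=0)

    # Empirical initial-state distribution.
    p0 = np.zeros(n_states)
    for traj in trajs:
        p0[traj[0]] += 1.0 / len(trajs)

    theta = np.zeros(n_feats)
    for _ in range(iters):
        r = features @ theta

        # Soft value iteration under reward r (stabilized log-sum-exp).
        V = np.zeros(n_states)
        for _ in range(horizon):
            Q = np.stack([r + gamma * P[a] @ V for a in range(n_actions)], axis=1)
            m = Q.max(axis=1)
            V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))
        pi = np.exp(Q - V[:, None])  # max-ent policy pi(a|s)

        # Expected state visitation frequencies under pi.
        D, d = np.zeros(n_states), p0.copy()
        for _ in range(horizon):
            D += d
            d = sum(P[a].T @ (pi[:, a] * d) for a in range(n_actions))

        # Gradient ascent on the max-ent log-likelihood:
        # expert feature counts minus expected feature counts.
        theta += lr * (f_expert - features.T @ D)
    return theta
```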
Lecture 7 (Feb 22): Shared Autonomy for Robot Control with Human-in-the-Loop
Shared autonomy via deep reinforcement learning
Interactive autonomous driving through adaptation from participation
Shared autonomy via hindsight optimization
Learning models for shared control of human-machine systems with unknown dynamics
RelaxedIK: Real-time synthesis of accurate and feasible robot arm motion

Optional Reading
Designing robot learners that ask good questions
Blending human and robot inputs for sliding scale autonomy
Inferring and assisting with constraints in shared autonomy
Collaborative control for a robotic wheelchair: evaluation of performance, attention, and workload
Director: A user interface designed for robot operation with shared autonomy
Presenters: Andrei, Nnorom, Bin, Tingwu
Materials: Slides
(A minimal command-blending code sketch follows this entry.)
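Several readings above arbitrate between human and robot inputs. Here is a minimal sliding-scale blending sketch in the spirit of that line of work; the function name and the confidence-based arbitration weight are illustrative assumptions, not any paper's method.

```python
# Sliding-scale autonomy sketch: linearly blend the human's command with the
# robot's autonomous command, weighting by the robot's confidence in its
# prediction of the human's goal. All names and thresholds are hypothetical.
import numpy as np

def blend_commands(u_human, u_robot, confidence, min_alpha=0.0, max_alpha=0.8):
    # alpha = 0: pure teleoperation; alpha = 1: full autonomy.
    alpha = np.clip(confidence, min_alpha, max_alpha)
    return (1.0 - alpha) * np.asarray(u_human) + alpha * np.asarray(u_robot)

u = blend_commands(u_human=[0.2, 0.0], u_robot=[0.5, 0.1], confidence=0.6)
print(u)  # blended velocity command sent to the robot
```

Capping the arbitration weight below 1 (here, `max_alpha=0.8`) is one simple way to keep the human in the loop even when the robot is confident.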
Lecture 8 (Mar 1): Adversarial Imitation Learning
GAIL: Generative adversarial imitation learning
Model-based adversarial imitation learning
InfoGAIL: interpretable imitation learning from visual demonstrations
Model-free imitation learning with policy optimization
Presenters: Yeming, Yuwen, Yin-Hung, Jun
Materials: Slides
Lecture 9 (Mar 8): Imitation Learning Combined with Reinforcement Learning, Control, and Planning #2
Learning neural network policies with guided policy search under unknown dynamics
PLATO: Policy learning using adaptive trajectory optimization
Learning complex dexterous manipulation with deep reinforcement learning and demonstrations
Using probabilistic movement primitives in robotics

Optional Reading
Model-based imitation learning by probabilistic trajectory matching
DeepMimic: Example-guided deep reinforcement learning of physics-based character skills
Combining self-supervised learning and imitation for vision-based rope manipulation
Reinforcement learning from imperfect demonstrations
(Batch) reinforcement learning for robot soccer

Optional Reading: Imitation Can Improve Exploration
Overcoming exploration in reinforcement learning with demonstrations
Learning to gather information via imitation
Exploration from demonstration for interactive reinforcement learning

Presenters: Yuwei, Jienan, Jason
Materials: Slides
Lecture 10 (Mar 15): Inverse Reinforcement Learning #2
Guided Cost Learning: Deep inverse optimal control via policy optimization
Inverse KKT: Learning cost functions of manipulation tasks from demonstrations
Bayesian inverse reinforcement learning
Maximum margin planning

Optional Reading
Inverse reward design
Nonlinear inverse reinforcement learning with Gaussian processes
Compatible reward inverse reinforcement learning
Learning the preferences of ignorant, inconsistent agents
Imputing a convex objective function
Learning robust rewards with adversarial inverse reinforcement learning
Presenter: Florian
Materials: Slides
Lecture 11 (Mar 22): Rewards & Value Alignment
Learning the reward function for a misspecified model
Policy invariance under reward transformations: theory and applications to reward shaping
Scalable agent alignment via reward modeling: a research direction
Concrete problems in AI safety
Cooperative inverse reinforcement learning
Presenter: Florian
Lecture 12 (Mar 29): Project Presentations
Lecture 13 (Apr 5): Project Presentations

Recommended, but optional, books

Recommended simulators

You are encouraged to use the simplest possible simulator that can accomplish the task you are interested in. In most cases this means MuJoCo, but feel free to build your own.
For all the starred environments below, please be aware of the one-machine-per-student licensing restriction for the MuJoCo physics engine.

Resources for planning, control, and RL

Resources for ML

Recommended courses