Imitation Learning for Robotics, Winter 2019, CSC2621
Overview
Over the next few decades, millions of people from various backgrounds and levels of technical expertise will need to interact effectively with robotic technologies on a daily basis. As such, people will need to modify the behavior of their robots without explicitly writing code, by providing only a small number of kinesthetic or visual demonstrations. At the same time, robots should try to infer and predict the human's intentions and internal objectives from past interactions, in order to provide assistance before it is explicitly requested. This graduate-level seminar course will examine some of the most important papers in imitation learning for robot control, placing more emphasis on developments of the last 10 years. Its purpose is to familiarize students with the frontiers of this research area, to help them identify open problems, and to enable them to make a novel contribution.
Prerequisites
You need to be comfortable with: introductory machine learning concepts (such as from CSC411/ECE521 or equivalent), linear algebra, basic multivariable calculus, and introductory probability. You also need to have strong programming skills in Python. Note: if you don't meet all the prerequisites above, please contact the instructor by email. Optional, but recommended: experience with neural networks, such as from CSC321, and introductory-level familiarity with reinforcement learning and control.
Announcements
Dec 10, 2018: The course is now available for registration on ROSI/ACORN. We will be using the discussion board and the announcements section on Quercus.
Teaching Staff
Time and Location
When: Fridays, 1-3pm
Lecture room: AB 107, St. George Campus
Grading and Important Dates
- Presentations (20%): Each student enrolled in the class will present at least one paper and will be graded on the clarity of the presentation, the depth of their understanding of the material, and how well they address questions from the audience. Students who are scheduled to present on a given week will give a practice presentation to the instructor on the Monday of that week, 5:30-7pm, at PT283E, to ensure presentations of high quality.
- Assignment (20%): The assignment is now posted here.
- Project Proposal (10%): Due Feb 11 at 6pm. Students can take on projects in groups of 2-3 people. Tips for a good project proposal can be found here. Proposals need not be limited to papers covered in class by Feb 11: students are encouraged to look further ahead in the schedule and to start planning their project definition well before this deadline. Students who need help choosing or crystallizing a project idea should email the instructor. Project proposals that involve the use of real robot hardware should clearly indicate so, to allow the instructor sufficient time to arrange this.
- Midterm Progress Report (10%): Due Mar 1 at 6pm. Tips and expectations for a good midterm progress report are here.
- Project Presentation (10%): Due Mar 29 or Apr 5. This will be a short presentation, approximately 5-10 minutes, depending on the number of groups.
- Project Report and Code (30%): Due Apr 10. Tips and expectations for a good final project report can be found here.
Course Description
This course will broadly cover the following areas:
- Imitating the policies of demonstrators (people, expensive algorithms, optimal controllers); a minimal sketch of this idea follows the list below
- Connections between imitation learning, optimal control, and reinforcement learning
- Learning the cost functions that best explain a set of demonstrations
- Shared autonomy between humans and robots for real-time control
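As a concrete flavor of the first topic, here is a minimal behavioral cloning sketch in PyTorch: it fits a small feedforward policy to demonstration state-action pairs by supervised regression. The data, dimensions, and hyperparameters below are hypothetical placeholders, not part of the course material.

```python
# Minimal behavioral cloning sketch: fit a small MLP policy to demonstration
# state-action pairs by supervised regression. The data here is synthetic and
# the dimensions/hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn

# Hypothetical demonstrations: 1000 states (dim 4) with continuous actions (dim 2).
demo_states = torch.randn(1000, 4)
demo_actions = torch.randn(1000, 2)

# Small feedforward policy network mapping states to actions.
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(policy(demo_states), demo_actions)  # match the demonstrator's actions
    loss.backward()
    optimizer.step()

print("final imitation loss:", loss.item())
```

In practice the demonstrations would come from kinesthetic teaching, teleoperation, or an expert controller, and the course will cover why this naive supervised approach can fail under distribution shift.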
Schedule
Recommended, but optional, books
- Robot programming by demonstration, by Aude Billard, Sylvain Calinon, Rudiger Dillmann, Stefan Schaal
- Robot learning from human teachers, by Sonia Chernova, Andrea Thomaz
- An algorithmic perspective on imitation learning, by Takayuki Osa, Joni Pajarinen, Gerhard Neumann, Andrew Bagnell, Pieter Abbeel, Jan Peters
Recommended simulators
You are encouraged to use the simplest possible simulator to accomplish the task you are interested in; in most cases this means Mujoco, but feel free to build your own (a minimal Gym usage sketch follows the list below). For all the starred environments below, please be aware of the one-machine-per-student licensing restriction for the Mujoco physics engine:
- OpenAI Gym (Robotics*, Mujoco*, Box2D, Classic Control)
- DeepMind control suite*
- Surreal Robosuite (manipulation*)
- Klampt (manipulation and locomotion tasks, contact modeling)
- DART (manipulation and locomotion tasks, contact modeling)
- Udacity self-driving car simulator (based on Unity, needs a GPU)
- CARLA self-driving car simulator (based on Unreal Engine 4, needs a GPU)
- Holodeck (based on Unreal Engine 4, needs a GPU)
- AirSim (flying vehicles and cars, based on Unreal Engine 4, needs a GPU)
- TORCS self-driving car simulator
- V-REP (robot arms, humanoids, hexapods)
- DeepMind Lab (navigation in mazes)
- Gibson environment (navigation, locomotion in indoor environments, needs a GPU)
- HoME: Household multimodal environment (vision, audio, language grounding, physics, and multiple agents in indoor environments)
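For reference, here is a minimal usage sketch of the OpenAI Gym interface, assuming the classic Gym step/reset API available at the time of this course. It collects a single random-policy rollout on a Classic Control task, which requires no Mujoco license.

```python
# Minimal OpenAI Gym usage sketch (classic Gym API): one random-policy rollout
# on a Classic Control task, which requires no Mujoco license.
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()          # replace with a learned policy
    obs, reward, done, info = env.step(action)  # (observation, reward, done, info)
    episode_return += reward
env.close()
print("episode return:", episode_return)
```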
Resources for planning, control, and RL
- Open Motion Planning Library
- Control Toolbox from ETHZ (C++ only at the moment, but includes automatic differentiation)
- Trajectory optimization
- Black-DROPS Policy Search (C++ only at the moment)
- Guided Policy Search
- OpenAI Baselines
Resources for ML
- PyTorch
- Tensorflow
- GPyTorch (for Gaussian processes)
Recommended courses
- Robot Learning Seminar by Abdeslam Boularias
- Deep RL course by Sergey Levine, John Schulman, Chelsea Finn
- Deep RL course by Jimmy Ba
- Robot Learning and Sensorimotor Control course by Sethu Vijayakumar
- Algorithmic HRI course by Anca Dragan
- Related sections from Russ Tedrake's underactuated robotics course