Generalizable Autonomy

My research vision is to enable robotic systems that learn hierarchical control tasks by watching humans, interact and collaborate seamlessly with humans, and improve their performance and acquire new skills through self-practice. My approach to these challenges is to develop algorithmic methods that enable efficient robot learning for long-term sequential tasks through Generalizable Autonomy.

I bring together expertise from the areas of Reinforcement Learning, Optimal Control, and Computer Vision. The principal focus of my research is understanding the representations and algorithms that make learning for interaction both efficient and general in autonomous agents. My research combines model-based control with data-driven policy learning under unstructured perceptual inputs. My prior work also spans broad applications of these methods, ranging from personal to medical robotics. I am eager to continue researching fundamental questions in general-purpose intelligence for interactive robotic agents.

My work can broadly be divided into the following topics:

Generalizable Imitation

Realistic robotic solutions, in both personal and industrial applications, need not only individual skills but also the ability to structure tasks and plan interactions over prolonged horizons. These skills are ultimately only relevant if their composition achieves a higher-order objective. I work on imitation-guided algorithms to learn policies in sequentially and hierarchically structured tasks.

RoboTurk: A Crowdsourcing Platform for Robotic Skill Learning
RoboTurk is a system that scales imitation learning through rapid crowdsourcing of high-quality demonstrations. It enables the creation of large datasets for manipulation tasks, which we show improve the quality of imitation learning policies.
[Paper][Project Webpage] [Talk Video]
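To make the pipeline concrete, here is a minimal behavioral-cloning sketch — not the RoboTurk system itself — where hypothetical demonstration data comes from a noisy linear "expert" and a policy is fit by least squares (all names and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crowdsourced demonstrations: (state, action) pairs.
# Here the "expert" is a known linear controller plus observation noise.
K_true = np.array([[0.5, -1.2],
                   [0.8,  0.3]])
states = rng.normal(size=(500, 2))
actions = states @ K_true.T + 0.01 * rng.normal(size=(500, 2))

# Behavioral cloning: regress actions from states over the whole dataset.
K_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)
K_hat = K_hat.T

def policy(s):
    """Cloned policy: maps a state to an action."""
    return K_hat @ s
```

The point of crowdsourcing in this framing is simply that more (state, action) pairs shrink the regression error of the cloned policy — which is the sense in which larger demonstration datasets improve imitation learning.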
Neural Task Graphs
Neural Task Graph (NTG) Networks use a task graph as an intermediate representation to modularize both the representation of the video demonstration and the derived policy. This formulation achieves strong inter-task generalization on planning tasks.
[arXiv 1807.03480]
Neural Task Programming
We present a method that learns to generalize across hierarchical tasks with a single example. It bridges the idea of few-shot learning from demonstration and neural program induction.
[ICRA18] [Video] [arXiv 1710.01813]
Finding “It”: Weakly-Supervised Reference-Aware Visual Grounding in Instructional Video
In this work, we tackle visual grounding in instructional videos, where only the aligned transcriptions are available. We introduce the visually grounded action graph, a structured representation capturing the latent dependency between grounding and references in video.
[PDF] [Talk Video] CVPR 2018 (Oral)
Sequential Windowed Inverse Reinforcement Learning
This work extends task structure learning to policy learning, and evaluates the learned policies on both simulated and physical benchmark tasks.
[WAFR 2016] [IJRR 2018] [Talk Video]
Transition State Clustering
We propose an unsupervised algorithm that recovers task structure from demonstration data and autonomously performs semantic segmentation. It works with both kinematic and video data using pre-trained CNNs.
[ISRR 2015] [IJRR 2017] [ICRA 2016] [Tutorial-Video]
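The intuition behind segmenting demonstrations at transition states can be illustrated with a simplified sketch — not the published Transition State Clustering algorithm: flag timesteps of abrupt kinematic change in each demonstration, then cluster those candidate transition states across demonstrations (function names and thresholds here are hypothetical):

```python
import numpy as np

def transition_candidates(traj, thresh=0.5):
    """Flag timesteps where the motion changes abruptly — a crude
    proxy for a transition between sub-skills in one demonstration."""
    v = np.diff(traj, axis=0)                        # finite-difference velocity
    dv = np.linalg.norm(np.diff(v, axis=0), axis=1)  # acceleration magnitude
    return np.where(dv > thresh)[0] + 1              # indices into traj

def cluster_transitions(points, k, iters=50, seed=0):
    """k-means over transition states pooled from many demonstrations,
    so recurring transitions across demos fall into shared clusters."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(k)])
    return centers, labels
```

Each cluster of recurring transition states then marks a semantic boundary between task segments, shared across demonstrations.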

Generalizable Skill Learning

A skill should be reusable across tasks and objects to avoid constant relearning. It is not enough to learn a door-opening skill for one particular door if the robot must then re-learn it for a new door, or to open a fridge. Consequently, generalization across task families is an essential aspect of effective robot learning.

Self-Supervised Learning of Multimodal Representations
Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback. We use self-supervision to learn a compact and multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning.
[arXiv 1810.10191] [Project Webpage] Under review at ICRA 2019
Learning Task-Oriented Grasping for Tool Manipulation
Tool manipulation is vital for enabling robots to complete challenging task goals. We propose a learning-based model for two-stage tool use that jointly optimizes grasp robustness and suitability for the downstream manipulation, improving task success from visual input at test time.
[arXiv 1806.09266] [Project Webpage] [Talk Video]
Adaptive Policy Transfer for Stochastic Dynamical Systems
We introduce AdaPT, an algorithm that achieves provably safe, robust, and dynamically feasible zero-shot transfer of RL policies to new domains with dynamics error. AdaPT combines the strengths of offline policy learning in a black-box source simulator with online tube-based MPC to attenuate bounded model mismatch between the source and target dynamics.
[ISRR17] [arXiv 1707.04674]
Robust Policy Learning and Transfer
We investigate direct sim2real policy transfer for deformable pattern cutting. We also develop a method that leverages adversarial perturbations in policy gradient methods for robustness to environment perturbations at test time.
[ICRA17] [Cutting-Video] [IROS17] [ARPL-Video]
3D Reconstruction from Images
We present two methods for shape reconstruction from images: one uses weakly supervised generative models, and the other predicts a deformation field. The deformation-based method has also been used for grasp transfer to novel objects.
[3DV 17:arXiv] [DeformNet, WACV18:arXiv] [CoRL17:Grasping]

Skill Learning in Surgical Subtasks

Robot-Assisted Minimally Invasive Surgery (RMIS) was used in manual teleoperation mode in over 570,000 procedures worldwide in 2014, with 3,000 da Vinci systems in operation. However, RMIS procedures are tedious and depend highly on surgeon skill. Autonomy of surgical subtasks has the potential to assist surgeons, reduce fatigue, and enhance manual telesurgery. Moreover, the growing corpus of surgical data can enable data-driven learning for automation. I research learning from expert demonstrations in surgery, which poses unique challenges such as specular workspaces, constrained dexterity, and highly noisy datasets.

Tumor Localization using Automated Palpation
We propose a Gaussian Process based adaptive sampling method that improves the sample complexity of level-set discovery for tumor localization.
[CASE16] [Project Page]
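The general idea of Gaussian Process level-set discovery can be sketched in 1-D — this is a toy illustration, not the paper's method: fit a GP to probe measurements, then choose the next probe point with a straddle-style acquisition that favors points that are both uncertain and close to the level h of interest (all names and parameters here are illustrative):

```python
import numpy as np

def rbf(a, b, ell=0.15):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * ell**2))

def gp_posterior(X, y, Xs, noise=1e-4):
    """Standard GP-regression posterior mean and std at query points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def next_probe(X, y, Xs, h):
    """Straddle acquisition: prefer points with high posterior
    uncertainty whose mean is close to the target level h."""
    mu, sd = gp_posterior(X, y, Xs)
    return Xs[np.argmax(1.96 * sd - np.abs(mu - h))]
```

Iterating probe, measure, refit concentrates samples near where the measured field crosses h — i.e. near the estimated boundary — which is the sample-efficiency argument for adaptive over uniform probing.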
Autonomous Multi-Throw Suturing
We present an optimization framework and a novel mechanical needle guide design to perform supervised automation of multi-throw suturing.
[ICRA16] [Suturing-Video]
Autonomous Tumor Localization & Resection
We present two designs for surgical automation: a low-cost end-effector mount and a fluid injection system. We automate a 4-step tumor resection procedure to locate and debride a subcutaneous tumor.
[CASE16] [Video] Best Video Award
Disposable Sensors for Minimally Invasive Surgery
We propose a disposable haptic palpation probe for locating subcutaneous blood vessels in robot-assisted minimally invasive surgery.
[CASE15] Best Poster/Demo Award.
Learning by Observation for Surgical Subtasks
We propose a Learning by Observation algorithm for surgical subtasks, demonstrated with multilateral cutting of 3D viscoelastic and 2D orthotropic tissue phantoms.
[ICRA15] [Video] [Short Talk]
Best Medical Robotics Paper Award Finalist

Radiation Therapy for Cancer: Planning and Delivery

High Dose Rate Brachytherapy (HDR-BT) is an internal radiation therapy used for over 500,000 cancer patients annually in the US. It is prevalent for treatment in many body sites, such as the mouth, breast, and prostate, and involves radioactive sources placed temporarily proximal to or within tumors. Current methods for intracavitary and interstitial HDR-BT use generic templates, which result in inadequate dose coverage and healthy-organ puncture, respectively.
We present novel patient-specific 3D-printed implants and needle guides for the respective modes; we also evaluate robot-assisted needle implants for interstitial HDR-BT.

3D Printed Implants for Intracavitary Brachytherapy
We propose a new approach that builds on progress in 3D printing and steerable needle motion planning to create customized implants with curvature-constrained internal channels that fit securely, minimize air gaps, and precisely guide radioactive sources.
[CASE13] [Short Talk] [Slides]
Material Evaluation of 3D Printed GYN Implants
This study evaluates the radiation attenuation properties of PC-ISO, a commercially available, biocompatible, sterilizable 3D-printing material, and its suitability for customized, single-use gynecologic (GYN) brachytherapy applicators that can accurately guide seeds through linear and curved internal channels.
Robot-Guided Needle Insertion for HDR-BT
We leverage human-centered automation to reduce side effects from HDR-BT in prostate cancer by efficiently delivering radiation to the prostate while minimizing trauma to sensitive structures such as the penile bulb. We modify the Acubot-RND system to guide needles into desired skew-line arrangements algorithmically calculated with needle planning and inverse dose planning algorithms.
[CASE12] [T-ASE13] [Video] [CASE-Talk]
Best Application Paper Award.
Reachability Analysis for Needle Planning in HDR-BT
3D Printed Guides for Prostate Brachytherapy
We propose patient-specific custom needle guides for implanting needle configurations in prostate HDR-BT. This work builds upon the robot-guided needle implants and evaluates a low-cost yet effective method for achieving clinical objectives.
[PDF-soon!] [Brachytherapy14]
