Wouldn’t it be great if computers could program themselves, with a few simple directives from us? Or would it?

The vision of automated program synthesis has been around for more than 50 years in various incarnations, and is receiving new life from recent advances in machine learning. Our inspiration comes from these works and from John McCarthy’s hypothetical “Advice Taker” (1958), whose performance could improve over time as a result of receiving advice, rather than being reprogrammed.

These days many of my students and I are very interested in how humans can interact seamlessly with embodied and disembodied intelligences. One aspect of such interaction concerns how “the program” of such an intelligence is specified, acquired, or synthesized from examples, instructions, and advice. Such instructions or advice may take the form of a sketch of what to do, safety and liveness constraints, and/or user preferences. These sketches, constraints, and preferences may be specified compositionally in one or more of Linear Temporal Logic (LTL), programming languages such as Golog, regular expressions, automata, Hierarchical Task Networks, natural language, or the like, or they may be acquired through learned interaction with a user or the environment.

The kernel of this work originally emerged from our work on automated web service composition, where we used generic procedures together with user-specific customizing constraints and preferences to synthesize compositions tailored to the objectives of users and other stakeholders. Over the last decade we have developed a suite of highly optimized computational techniques that allow for plan synthesis in this milieu, and we are now extending them to program synthesis. Some of this vision was discussed in an invited talk at IJCAI 2016 entitled “Do as I say and as I do: the future of automated programming”.
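To make the idea of constraint-guided synthesis concrete, here is a minimal sketch of how temporal constraints can filter candidate plans. It is purely illustrative and not from any particular system described above: the constraints stand in for LTL-style safety (“never do the unsafe action”) and response (“every trigger is eventually followed by a response”) properties, implemented here as simple monitors over finite action traces, and all action names are hypothetical.

```python
# Illustrative sketch: temporal constraints as monitors that filter candidate
# plans. Action names and constraints are invented for this example.

def make_safety_monitor(forbidden):
    """Monitor for a safety property: the trace never contains `forbidden`."""
    def accepts(trace):
        return all(action != forbidden for action in trace)
    return accepts

def make_response_monitor(trigger, response):
    """Monitor for a response property: every `trigger` is eventually
    followed by `response` within the (finite) trace."""
    def accepts(trace):
        pending = False
        for action in trace:
            if action == trigger:
                pending = True
            if action == response:
                pending = False
        return not pending
    return accepts

# Hypothetical candidate plans produced by some synthesis procedure.
candidate_plans = [
    ["pick", "move", "drop"],
    ["pick", "spill", "drop"],   # violates safety: "spill" occurs
    ["pick", "move"],            # violates response: "pick" never answered by "drop"
]

monitors = [
    make_safety_monitor("spill"),
    make_response_monitor("pick", "drop"),
]

# Keep only the plans that satisfy every constraint.
valid_plans = [p for p in candidate_plans if all(m(p) for m in monitors)]
# valid_plans == [["pick", "move", "drop"]]
```

In a real system the constraints would be compiled from LTL formulas into automata and used to prune the search during synthesis rather than to filter finished plans, but the filtering view captures the basic role the constraints play.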

The above description is a little dated; here are some topics that our research group has been working on lately:

See also the general publication page.