Undergraduate Artificial Intelligence Group

Winter 2014 Meetings

Tuesday May 6, 2014 (2:00pm SF1105):
Panel Discussion: Transcendence and the upcoming AI singularity (video)

Friday April 4, 2014 (4:30pm PT266):
UAIG Elections: help nurture a great AI community by unleashing your passion for AI!

Friday March 28, 2014 (4:30pm PT266):
UAIG AI&Games 5: "AI for real time game playing" by Steve Engels (slides)

Friday March 21, 2014 (4:30pm PT266):
UAIG AI&Games 3&4: "AI methods for Backgammon" by Harun Mustafa
followed by: "Domain Independent Game Playing" by Daniel Kats (slides)

Friday March 14, 2014 (4:30pm PT266):
Meeting cancelled: due to unforeseen circumstances our speaker cannot make it this Friday, so we will be combining both talks at the next meeting on Friday March 21. Come next week for an exciting double feature on domain-independent games and probabilistic game playing!

Friday March 7, 2014 (4:30pm PT266):
UAIG AI&Games 2: "Systems with General Intelligence: A New Perspective" by Michael Thielscher

Friday February 28, 2014 (4:30pm PT266):
UAIG AI&Games 1: Welcome to UAIG's new series of talks and events: AI&Games! We are starting off with "Turing Test for Game Bots" by Avraham Sherman

Friday February 7, 2014 (4:30pm PT266):
UAIG grad talk 4 (slides): [Computational Linguistics] "Interpreting Anaphoric Shell Nouns using Antecedents of Cataphoric Shell Nouns" by Varada Kolhatkar

Friday January 24, 2014 (4:30pm PT266):
UAIG grad talk 5 (slides): [Computer Vision] "Local models for shape and motion" by Fernando Flores-Mangas

Fall 2013 Meetings

Friday November 29, 2013 (4:30pm PT266):
UAIG grad talk 3 (slides): [Computational Biology] "Detecting Copy Number Variation in a Fetal Genome using Maternal Plasma Sequencing" by Ladislav Rampášek

Wednesday November 27, 2013 (4:30pm PT378):
UAIG grad talk 2 (slides): [Knowledge Representation] "Elicitation and Approximately Stable Matching with Partial Preferences" by Joanna Drummond

Friday November 22, 2013 (4:30pm PT266):
UAIG grad talk 1 (slides): [Machine Learning] "Convolutional Neural Nets for Computer Vision" by Nitish Srivastava

Friday November 1, 2013 (4:30pm PT266):
Computational Economics (slides): "Multi-Dimensional Single-Peakedness and its Approximations" by Alex Francois-Nienaber (slides by Xin Sui)

Friday October 18, 2013 (4:30pm PT266):
Reinforcement Learning (slides)

Wednesday October 2, 2013 (4:30pm PT266):
Introductory meeting, AI Ethics (slides)

Summer 2013 Meetings

Friday May 10, 2013 (PT266):
UAIG met a few times over the summer (slides)

Winter 2013 Meetings

Wednesday April 3, 2013 (5pm BA5256):
Our final meeting of the semester will be this Wednesday, April 3. The main purpose of the meeting will be to coordinate things for next year. If you're interested in getting involved in UAIG, or want to know more about what that would entail, please come out to this meeting. As usual, we'll be meeting in BA5256 at 5pm.

Wednesday March 27, 2013 (5pm BA5256):
Guest speaker: Frank Rudzicz
Topic: Communicating with Machines: An Introduction to SPOClab
Abstract: In this talk I introduce SPOClab (Signal Processing and Oral Communication), which bridges Computer Science at the University of Toronto with the Toronto Rehabilitation Institute. The goal of our lab is to produce software that helps to overcome challenges of communication including speech and language disorders. This will be organized into two co-dependent streams of research. First, we will embed control-theoretic models of speech production into augmented ASR systems using various machine-learning techniques. Second, these systems will be deployed in software that can be used in practice; this involves adjacent disciplines such as human-computer interaction and general natural language processing to design and study application interfaces for disabled users.

Wednesday February 13, 2013 (5pm BA5256):
Guest speaker: Ilya Sutskever
Topic: Image classification with convolutional neural networks
Abstract: We describe an application of large convolutional neural networks to object recognition. Our network has 8 layers, 600 million connections, 60 million parameters, and 600,000 neurons, making it one of the largest neural networks ever trained. The network was trained to categorize images into 1000 classes using the 1.2M training images of the ImageNet Large Scale Visual Recognition Challenge 2012 competition. The network was implemented on two GPUs and used a number of novel techniques to prevent overfitting. We entered a variant of this network in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. Additionally, the network's visual representation (which has 4096 dimensions) outperformed 128 neurons from the IT area of a macaque's visual cortex at a certain recognition task that causes other computer vision systems to fail.
This is joint work with Alex Krizhevsky and Geoffrey Hinton.
Download the slides from the talk.
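For readers new to convolutional networks, here is a minimal, illustrative NumPy sketch of the basic layer operation such networks stack many times (convolution, ReLU, max-pooling). The sizes, the random filter, and the helper names conv2d and max_pool are made up for the example; this is not the network described above.

import numpy as np

def conv2d(image, kernel):
    # "Valid" 2-D convolution (implemented as cross-correlation, as in most CNNs).
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Non-overlapping max-pooling over size x size windows.
    h, w = fmap.shape
    fmap = fmap[:h - h % size, :w - w % size]
    return fmap.reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.random.rand(8, 8)                         # toy grayscale input
kernel = np.random.randn(3, 3)                       # one filter (random here, learned in practice)
activation = np.maximum(conv2d(image, kernel), 0.0)  # ReLU nonlinearity
print(max_pool(activation).shape)                    # (3, 3)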

Wednesday February 6, 2013 (5pm BA5256):
"Introductory" meeting (a little late).

Wednesday January 30, 2013 (5pm BA5256):
Guest speaker: Jackie Cheung
Topic: Discovering Semantic Knowledge Using Distributional Information
Abstract: Mapping a sentence or some other linguistic unit to a representation of its meaning is required for many complex tasks in natural language processing. In natural language semantics, there have been two major approaches to modelling meaning. One approach uses symbolic, logical representations and their associated logical inference rules to represent and reason about the world. Another uses statistical, distributional information about the contexts in which a word or phrase appears in a large corpus of training text to model its meaning. I will show that distributional information can actually be used to discover the sort of semantic knowledge and structures used in the logical approach in two settings.
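As a deliberately tiny illustration of the distributional idea, the sketch below builds word vectors from co-occurrence counts in a made-up corpus and compares them with cosine similarity. The corpus, window size, and word choices are invented for the example.

import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}
window = 2

# Count how often each context word appears within `window` positions of each word.
vectors = np.zeros((len(vocab), len(vocab)))
for pos, word in enumerate(corpus):
    context = corpus[max(0, pos - window):pos] + corpus[pos + 1:pos + 1 + window]
    for ctx in context:
        vectors[index[word], index[ctx]] += 1

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words used in similar contexts ("cat"/"dog") end up with similar vectors.
print(cosine(vectors[index["cat"]], vectors[index["dog"]]))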

Wednesday January 23, 2013 (5pm BA5256):
Guest speaker: Abdel-rahman Mohamed
Topic: How do machines recognize speech?
Abstract: In this talk I will introduce the field of speech processing, focusing on Automatic Speech Recognition (ASR). I will describe the basic blocks of a typical ASR system, and then our contributions at UofT to state-of-the-art ASR systems. The algorithms we developed at UofT are the best-performing ones at Google, IBM, and Microsoft research labs and are currently used in Google’s Android 4.1.

2012 Meetings

Monday November 26, 2012 (5pm @ Top Sushi):
This will be our last official meeting of the semester. We'll take a break from the usual meeting format and meet at Top Sushi (just across the street on College) for dinner at the regular time, 5-6pm. We can discuss topics we covered this semester, themes (and projects!) for next semester, and just plain socialize.

Monday November 19, 2012 (5pm BA5256):
This week we'll step back and take a broader look at the field of AI as a whole. We'll discuss the variety of often divergent goals AI researchers have and the different motivations and assumptions underlying different approaches. Here is a diverse list of readings/videos related to these ideas. Investigate whatever you feel is interesting, but feel free to show up at the meeting even if you haven't looked at any of them. This discussion should be pitched at such a level that no specific prerequisite knowledge is required.
Recommended reading:
Recommended videos:

Monday November 12, 2012:
No meeting - fall break!

Monday November 5, 2012 (5pm BA5256):
Guest speaker: Charlie Tang
Topic: Deep Networks for Face Recognition
Abstract: Visual perception is a challenging problem in part due to illumination variations. A possible solution is to first estimate an illumination invariant representation before using it for recognition. The object albedo and surface normals are examples of such representation. In this work, we introduce a multilayer generative model where the latent variables include the albedo, surface normals, and the light source. Combining Deep Belief Nets with the Lambertian reflectance assumption, our model can learn good priors over the albedo from 2D images. Illumination variations can be explained by changing only the lighting latent variable in our model. By transferring learned knowledge from similar objects, albedo and surface normals estimation from a single image is possible in our model. Experiments demonstrate that our model is able to generalize as well as improve over standard baselines in one-shot face recognition.
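To make the image-formation assumption concrete, here is a minimal NumPy sketch of Lambertian shading (pixel intensity = albedo × max(0, normal · light)) for a toy 2×2 image. The albedo, normals, and light direction are invented; this is only the rendering rule the generative model inverts, not the model itself.

import numpy as np

albedo = np.array([[0.8, 0.6],
                   [0.9, 0.5]])                            # per-pixel reflectance
normals = np.array([[[0.0, 0.0, 1.0], [0.0, 0.6, 0.8]],
                    [[0.6, 0.0, 0.8], [0.0, 0.0, 1.0]]])   # unit surface normals
light = np.array([0.0, 0.0, 1.0])                          # directional light source

shading = np.clip(normals @ light, 0.0, None)              # cosine of the incidence angle
image = albedo * shading                                   # rendered pixel intensities
print(image)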

Monday October 29, 2012 (5pm BA5256):
Sean will lead a discussion on deep belief nets

Recommended reading:
Optional reading: If you want to know more about contrastive divergence: On Contrastive Divergence Learning
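For anyone who wants to see contrastive divergence spelled out, here is a minimal CD-1 update for a small binary restricted Boltzmann machine. The layer sizes, learning rate, training vector, and the helper name cd1_update are arbitrary; this is only a sketch of the idea in the reading.

import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)            # visible biases
b_h = np.zeros(n_hidden)             # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    # One CD-1 parameter update from a single binary training vector v0.
    global W, b_v, b_h
    p_h0 = sigmoid(v0 @ W + b_h)                          # positive phase: infer hidden units
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + b_v)                        # one Gibbs step: reconstruct visibles
    v1 = (rng.random(n_visible) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)                          # re-infer hidden units
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))   # data minus reconstruction statistics
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)

cd1_update(np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0]))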

Monday October 22, 2012 (5pm BA5256):
Guest speaker: Paul Grouchy
Topic: Evolutionary Algorithms and Artificial Intelligence
Abstract: Natural evolution has produced the most advanced intelligence discovered to date: our own. One would then expect that computer simulations of evolution could produce artificial intelligences. A variety of evolution-based programming techniques will be presented. Some of these techniques will be examples of Evolutionary Algorithms (EAs) as a form of AI, while others will showcase the power of EAs to artificially evolve neural network based AIs.
Download the slides from the talk.
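As a toy illustration of the evolutionary-algorithm idea (not the speaker's method), the sketch below evolves a bit string toward an all-ones target using mutation and selection. The genome length and mutation rate are arbitrary.

import random

TARGET_LEN = 20

def fitness(genome):
    return sum(genome)                               # count of 1-bits; higher is better

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

parent = [random.randint(0, 1) for _ in range(TARGET_LEN)]
for generation in range(1000):
    child = mutate(parent)
    if fitness(child) >= fitness(parent):            # keep the child if it is no worse
        parent = child
    if fitness(parent) == TARGET_LEN:
        break
print(generation, fitness(parent), parent)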

Monday October 15, 2012 (5pm BA5256):
Intro to neural networks
Recommended reading: From Neural Networks to Deep Learning: Zeroing in on the Human Brain
Recommended videos: The Next Generation of Neural Networks

Monday October 1, 2012 (5pm BA5256):
Introductory meeting

2011 Meetings

November 28, 2011:
Professor Sheila McIlraith from the Knowledge & Representation group will be presenting.

November 21, 2011:
No meeting. (award reception for NSERC recipients and others)

November 14, 2011:
Adam Golding on the computational modeling of preferences

October 31, 2011:
Meeting in PT266, 5-6 pm.
Topic: Knowledge & Representation
Outline: We have two talks scheduled, see below:

5:00 – 5:30
Title: Plan Dispatchability: A Survey
Author: Christian Muise
In this talk we present the simple temporal network formalism, its extensions, and the applications and solutions that have been presented in the literature. A simple temporal network is a type of plan that describes the events that must be executed, and the temporal constraints that must be satisfied during execution. The focus will be primarily on showing the consistency of temporal networks, and controllability of temporal networks with uncertainty. We will also briefly cover some of the more esoteric extensions to simple temporal networks that involve resources, preferences, and choice of subplans.
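For context, a simple temporal network is consistent exactly when its distance graph has no negative cycle, which a Bellman-Ford-style check can detect. The sketch below is only a minimal illustration; the events, bounds, and the helper name stn_consistent are made up.

def stn_consistent(n_events, constraints):
    # constraints: list of (i, j, w) meaning time[j] - time[i] <= w.
    dist = [0.0] * n_events                          # all-zero start acts like a virtual source
    for _ in range(n_events):
        for i, j, w in constraints:
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
    # If any constraint can still be tightened, the distance graph has a negative cycle.
    return all(dist[i] + w >= dist[j] for i, j, w in constraints)

# Event 1 must happen between 5 and 10 time units after event 0: consistent.
print(stn_consistent(2, [(0, 1, 10), (1, 0, -5)]))   # True
# Lower bound of 5 but upper bound of 3: inconsistent.
print(stn_consistent(2, [(0, 1, 3), (1, 0, -5)]))    # False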

5:30-6:00
Title: Solving QBF: CNF and alternatives
Author: Alexandra Goultiaeva
The Quantified Boolean Formula (QBF) problem is a PSPACE-complete extension of the satisfiability (SAT) problem that allows formulas to have quantification. It can be used to naturally and efficiently represent problems with adversarial dynamics, such as conditional planning, as well as various problems in CAD and verification. The most widespread approach to solving QBF is to have a search-based algorithm work on prenex Conjunctive Normal Form (CNF) representations. However, in recent years it has been shown that relaxing these constraints can often be beneficial. This talk will outline the current approaches to solving QBF formulas, as well as techniques for non-CNF and non-prenex reasoning.
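To make the problem concrete, here is a naive recursive QBF evaluator that expands every quantifier explicitly, so it is exponential and only usable on tiny formulas (unlike the solvers discussed in the talk). The example formula and the helper name evaluate are made up for illustration.

def evaluate(prefix, matrix, assignment=None):
    # prefix: list of ('forall' | 'exists', var); matrix: CNF as a list of clauses,
    # where each clause is a list of literals like 'x' or '-x'.
    assignment = dict(assignment or {})
    if not prefix:
        return all(any(assignment[lit.lstrip('-')] != lit.startswith('-') for lit in clause)
                   for clause in matrix)
    (q, var), rest = prefix[0], prefix[1:]
    branches = [evaluate(rest, matrix, {**assignment, var: val}) for val in (False, True)]
    return all(branches) if q == 'forall' else any(branches)

# forall x exists y: (x or not y) and (not x or y)  -- true, since y can copy x.
print(evaluate([('forall', 'x'), ('exists', 'y')], [['x', '-y'], ['-x', 'y']]))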

October 17, 2011:
BACK IN ACTION – meetings on Mondays 5-6 pm in PT266

With the following preliminary schedule:
Oct. 24 – Misko Dzamba on computational biology
Oct. 31 – KR Presentations
Nov. 7 – FALL BREAK (go read)
Nov. 14 – Chris Maddison on recurrent neural nets
Nov. 21 – Adam Golding on computational modeling of preferences

March 15, 2011:
Focus: knowledge and representation
Guest speakers: Eric Hsu and Alexandra Goultiaeva
Topic: SAT solving

March 8, 2011:
Focus: computational linguistics
Guest speaker: Chris Parisien
Finding structure in the mire: Bayesian models of how children learn to use verbs
Children are fantastic data miners. In the first few years of their lives, they discover a vast amount of knowledge about their native language. This means learning not just the abstract representations that make up a language, but also learning how to generalize that knowledge to new situations — in other words, figuring out how language is productive. Given the noise and complexity in what kids hear, this is incredibly difficult, yet still, it seems effortless. In verb learning, a lot of this generalization appears to be driven by strong regularities between form and meaning. Seeing how a certain verb has been used, kids can make a decent guess about what it means. Knowing what a verb means can suggest how to use it.
In this talk, I present a series of hierarchical Bayesian models to explain how children can acquire and generalize abstract knowledge of verbs from the language they would naturally hear. Using a large, messy corpus of child-directed speech, these models can discover a broad range of abstractions governing verb argument structure, verb classes, and alternation patterns. By simulating experimental studies in child development, I show that these complex probabilistic abstractions are robust enough to capture key generalization behaviours of children and adults. Finally, I will discuss some promising ways that the insights gained from modelling child language can benefit the development of a valuable large-scale linguistic resource, namely VerbNet.

March 1, 2011:
Focus: computational biology
Guest speaker: Abe Heifets
LigAlign: Flexible ligand-based active site alignment and analysis
Ligand-based active site alignment is a widely adopted technique for the structural analysis of protein–ligand complexes. However, existing tools for ligand alignment treat the ligands as rigid objects even though most biological ligands are flexible. We present LigAlign, an automated system for flexible ligand alignment and analysis. When performing rigid alignments, LigAlign produces results consistent with manually annotated structural motifs. In performing flexible alignments, LigAlign automatically produces biochemically reasonable ligand fragmentations and subsequently identifies conserved structural motifs that are not detected by rigid alignment.
(see readings for the full article)

February 22, 2011:
Reading week. No meeting.

February 15, 2011:
Focus: computational cognitive science
In preparation for the Distinguished Lecture Series happening earlier the same day, members are asked to choose and read a paper by Josh Tenenbaum (see Readings section).
The meeting will consist of a brief overview of the talk, as well as a discussion of the ideas and concepts related to Josh Tenenbaum’s research.

February 8, 2011:
Focus: cognitive science
Adam Golding will lead the discussion.
Everyone is asked to choose an article from one of the encyclopedias listed under readings.
The group discussion will target the heterogeneity/eclecticism/pluralism inherent in cogsci.

February 1, 2011:
Focus: computer vision
Title: The Need for Mid-Level Shape Priors in Object Categorization
Invited speaker: Pablo Sala
Object categorization plays an important role in computer vision and image retrieval. Although a trivial task for humans, this is an extremely challenging computational problem, which remains largely unsolved.
Without knowing what they are looking at, humans have the ability to organize ambiguous visual stimuli into coherent groups. This important perception mechanism involved in the early stages of the object categorization process is called “perceptual grouping”. Although research in perceptual grouping was very active in the object recognition community until the mid-90s, in recent years most categorization researchers have moved to formulations of the recognition problem as object detection. However, recognition as detection does not scale to large object databases, where an informative shape index requires domain-independent (not object-specific) shape priors to drive the processes of perceptual grouping and perceptual abstraction.
In this talk, I’ll present research on the problem of generic object recognition. Rather than assuming an object-level shape prior, I follow the classic formulation of the recognition problem and assume a vocabulary of compositional parts from which objects can be constructed.
I’ll show an approach to group image contours into abstract 2-D parts and discuss various methods for selecting, from among the set of generated 2-D parts, a subset that provides the best interpretation of the image. Finally, I’ll explain how the selected 2-D parts can be grouped into 3-D volumes abstracting the 3-D shapes in the scene.

January 25, 2011:
Focus: computational linguistics and NLP
(same place and time as last week)
Reading posted under the reading section and in dropbox.
To continue our speech processing theme, we will also watch "words in puddles of sound"

January 18, 2011:
Focus: computational linguistics and NLP
We are meeting 4-5 pm in BA5256 (this will be our regular room).
Please read the paper posted under the ‘readings section’.
Michelle will lead the discussion on authorship attribution, as well as provide us with an introduction to computational linguistics.

January 11, 2011:
Focus: computational linguistics – the problems

2010 Meetings

December:
Break: do what you like.

November 29, 2010:
- focus: computer vision – features for detection, evolution of categorization

November 22, 2010:
- focus: human vision – research directions

November 15, 2010: we had presentations
- focus: computer vision – methods
- led by Konstantine

November 8, 2010:
holiday

November 1, 2010:
- focus: intro to machine learning – methods
- led by Sean

October 21, 2010:
- introductory meeting
- group discussion
- administrative issues