A Face-to-Face Neural Conversation Model

Hang Chu 1,2
Daiqing Li 1
Sanja Fidler 1,2

1University of Toronto
2Vector Institute
CVPR, 2018


We propose to model natural language and the language of facial gestures together.

Neural networks have recently become good at engaging in dialog. However, current approaches are based solely on verbal text, lacking the richness of a real face-to-face conversation. We propose a neural conversation model that aims to read and generate facial gestures alongside text. This allows our model to adapt its response to the “mood” of the conversation. In particular, we introduce an RNN encoder-decoder that exploits the movement of facial muscles as well as the verbal conversation. The decoder consists of two layers: the lower layer generates the verbal response and coarse facial expressions, while the second layer fills in the subtle gestures, making the generated output smoother and more natural. We train our neural network by having it “watch” 250 movies. We showcase our joint face-text model, which generates more natural conversations as measured by automatic metrics and a human study. We demonstrate an example application with a face-to-face chatting avatar.
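To make the two-layer decoder concrete, below is a minimal PyTorch sketch of an encoder-decoder that reads word tokens together with per-frame facial features and decodes a word sequence, coarse expression labels, and refined gesture values. All module names, dimensions, and the choice of facial action-unit (AU) vectors as the face representation are assumptions made for this illustration; it is a sketch of the described design, not the authors' implementation.

```python
# Minimal sketch of a face-and-text encoder-decoder with a two-layer decoder.
# The AU-vector face representation and all sizes below are assumptions.
import torch
import torch.nn as nn


class FaceTextEncoder(nn.Module):
    """Encodes a conversation turn from word embeddings plus facial features."""

    def __init__(self, vocab_size, embed_dim=256, face_dim=17, hidden_dim=512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.face_proj = nn.Linear(face_dim, embed_dim)
        self.rnn = nn.GRU(2 * embed_dim, hidden_dim, batch_first=True)

    def forward(self, words, face_feats):
        # words: (B, T) token ids; face_feats: (B, T, face_dim) per-step AU vectors
        x = torch.cat([self.word_embed(words), self.face_proj(face_feats)], dim=-1)
        _, h = self.rnn(x)  # h: (1, B, hidden_dim) summary of the input turn
        return h


class TwoLayerDecoder(nn.Module):
    """Lower layer emits words and coarse expressions; upper layer refines gestures."""

    def __init__(self, vocab_size, embed_dim=256, face_dim=17,
                 hidden_dim=512, n_coarse=6):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.coarse_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.word_head = nn.Linear(hidden_dim, vocab_size)   # next-word logits
        self.coarse_head = nn.Linear(hidden_dim, n_coarse)   # coarse expression class
        # The second layer conditions on the lower layer's states and fills in
        # subtle, smooth facial motion (here: continuous AU offsets).
        self.fine_rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.fine_head = nn.Linear(hidden_dim, face_dim)

    def forward(self, enc_state, prev_words):
        x = self.word_embed(prev_words)          # (B, T, embed_dim)
        low, _ = self.coarse_rnn(x, enc_state)   # (B, T, hidden_dim)
        word_logits = self.word_head(low)
        coarse_logits = self.coarse_head(low)
        fine, _ = self.fine_rnn(low, enc_state)  # refinement on top of the lower layer
        gesture = self.fine_head(fine)
        return word_logits, coarse_logits, gesture


# Example forward pass on random data, only to show the shapes involved.
enc = FaceTextEncoder(vocab_size=10000)
dec = TwoLayerDecoder(vocab_size=10000)
words = torch.randint(0, 10000, (2, 12))
faces = torch.randn(2, 12, 17)
state = enc(words, faces)
w_logits, c_logits, gestures = dec(state, words)
print(w_logits.shape, c_logits.shape, gestures.shape)
```

In this sketch the word and coarse-expression heads sit on the lower recurrent layer, while a second recurrent layer consumes the lower layer's hidden states to produce the smoother, fine-grained gesture track, mirroring the coarse-then-subtle split described above.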



Input: Face Video

Input: Text


Live chat with Hank through your webcam

Live Demo Link


Paper

Hang Chu, Daiqing Li, Sanja Fidler

A Face-to-Face Neural Conversation Model

CVPR 2018   [pdf]  

Model

Results

Dataset


MovieChat: 48K movie conversations with faces   [download]

© All rights reserved. Webpage template borrowed from Richard Zhang.