Designing Speech and Language Interactions Workshop

CHI 2014, Toronto, Canada

Organizers

Dr. Cosmin Munteanu is a Research Officer with the National Research Council (NRC) Canada and an Adjunct Professor at the University of Toronto. His area of expertise is at the intersection of ASR and HCI; he has extensively studied the human factors of using imperfect speech recognition systems, and has designed and evaluated systems that improve humans' access to and interaction with information-rich media and technologies through natural language. Cosmin currently oversees the NRC's Voice and Multimodal Lab, where he leads several industrial and academic research projects exploring spoken language interaction for mobile devices and mixed reality systems.

Prof. Matt Jones is a Professor of Computer Science at the Future Interaction Technology Lab at Swansea University. He has worked on mobile interaction issues for the past seventeen years and has published extensively in this area, including co-authoring the "Mobile Interaction Design" book. He has collaborated widely with handset and service developers. He has been a Visiting Fellow at Nokia Research and held an IBM Faculty Award to work with the Spoken Web group at IBM Research India. He is an editor of the International Journal of Personal and Ubiquitous Computing and serves on the steering committee for the Mobile Human-Computer Interaction conference series. His research focusses on the fusion of physical and digital spaces in challenging contexts; recent projects have explored the role of haptics, gestures, and audio in mobile scenarios and in storytelling in rural Indian villages.

Prof. Steve Whittaker is a Full Professor of Psychology at the University of California, Santa Cruz. His research interests are in the theory and design of collaborative systems, CMC, speech browsing, and personal information management. He has designed many novel systems, including lifelogging systems, one of the first IM clients, shared workspaces, social network email clients, meeting capture systems, and various tools for accessing and browsing speech. He has previously worked at Sheffield University, Hewlett Packard, Bell Labs, AT&T, Lotus, and the IBM Cambridge and Almaden labs. He is a member of the Association for Computing Machinery's CHI Academy. He is currently working on digital tools to support human memory.

Prof. Gerald Penn is a Professor of Computer Science at the University of Toronto. His area of expertise is the study of human languages from both a mathematical and a computational perspective. Gerald is one of the leading scholars in Computational Linguistics, with significant contributions to the formal study of natural languages. His publications span many areas, from Theoretical Linguistics and Mathematics to ASR and HCI.

Dr. Sharon Oviatt is well known for her research on human-centred interfaces, multimodal and mobile interfaces, and educational interfaces. She has published over 130 scientific articles and serves as an Associate Editor of the main journals in the field of HCI. She received a National Science Foundation Special Creativity Award for pioneering work on mobile multimodal interfaces that combine natural input modes such as speech, pen, touch, and gesture. She recently founded Incaa Designs (http://www.incaadesigns.org/), a nonprofit that researches and evaluates new educational interfaces designed to stimulate thinking and reasoning.

Prof. Stephen Brewster is a Professor of Human-Computer Interaction in the Department of Computing Science at the University of Glasgow, UK, where he leads the Multimodal Interaction Group (part of the Glasgow Interactive Systems Group). His main research interest is Multimodal Human-Computer Interaction, encompassing sound, haptics, and gestures. Stephen has conducted significant research into Earcons, a particular form of non-speech sound. He has authored numerous publications in the fields of audio and haptic (touch-based) interaction and mobile computing devices.

Dr. Matthew Aylett has been involved in speech technology and HCI since 1994. He obtained an MSc in speech and language processing (Distinction) from the University of Edinburgh in 1995. Subsequently he worked as a research associate on spoken dialogue whilst pursuing a PhD (awarded in 2000) focused on the phonetic and prosodic analysis of spontaneous speech. In April 2000, he joined the R&D team of the Edinburgh University spin-out Rhetorical Systems Ltd, where he played a fundamental role in designing and building the rVoice speech synthesizer; other key contributions included work on prosodic modelling and intelligibility. He continued to publish internationally throughout this period. In 2006 he founded CereProc Ltd, which in 2007 released the first commercial speech synthesis system to allow modification of voice quality, adding underlying emotion to synthetic voices.

Dr. Nicolas d'Alessandro is a postdoctoral researcher at the numediart Institute, University of Mons, Belgium. Drawing on a lifelong interest in musical instruments and an acquired taste for speech and singing processing, he has incrementally shaped a research topic that uses gestural control of sound to gain insights into speech and singing production. As a performer, his stage interventions always carry an interdisciplinary research dimension rooted in human cognition and voice production. He has coordinated and contributed to the development of five main speech/singing systems: MBROLA, HandSketch, DiVA, ChoirMob and MAGE.