Name : Laurence Devillers

Institution : University Paris-XI, LIMSI-CNRS

Laurence Devillers has been an assistant professor since 1995 at the Computer Science Division of the University Paris-XI and a researcher at LIMSI-CNRS in France (in the Spoken Language Processing group). She obtained her HDR (French “Habilitation à Diriger des Recherches”) on December 4, 2006, on “Emotion in human-machine interaction: perception, detection and generation”.

Her research activities have focused on several aspects of speech processing: speech recognition, then speech understanding and spoken dialog. Since 2000, her work has followed a new research orientation towards the representation and detection of emotional states in human-human and human-machine interaction. Her research goal on emotions is, on the one hand, to describe and model the expression of emotional behaviour from authentic data and to study the variability of vocal signals between speakers (in the longer term, with multimodal data across different languages and cultures), and, on the other hand, to improve speech technologies and, more broadly, to build oral and multimodal systems that are “affectively intelligent”. Her research on emotions stretches from emotional expression in the voice, through multimodal expression, up to emotional and mental states in interaction situations. The originality of her work lies in its use of real-life data in which emotions are natural.

She currently heads a research topic on “Emotion and Speech” and is directing (or has directed) five PhD theses in the field of affective computing. She was involved in the international NoE FP6-HUMAINE and in the W3C Emotion Incubator Group as an expert on emotion annotation. She is now a member of the Executive Committee of the HUMAINE association. She collaborates with several international teams (UNIGE, Belfast University, ATR, Erlangen University, etc.) as well as with industrial partners (THALES, Vecsys, La Cantoche, etc.). She leads a new French national project (ANR) on Affective Avatars (2007-2009) and is involved in two other projects: ROMEO (interaction with a humanoid robot) and Voxfactory (user satisfaction detection).

Title of Project : Automatic detection of emotion-related expressions from real-life spoken dialogs

This decade has seen an upsurge of interest in affective computing. Speech and language are among the main channels for communicating human affective states. Affective speech and language processing can be used alone or coupled with other channels in many systems such as call centers, robots, and artificial animated agents for telephony, education, games, medical or security applications. Affective corpora are thus fundamental both to developing sound conceptual analyses and to training these 'affective-oriented systems' at all levels: to recognise user affect, to express appropriate affective states, to anticipate how a user in one state might respond to a possible kind of reaction from the machine, and so on. In this talk, I will focus on the study of the vocal expression of “emotion” in real-life spoken interactions in order to build an emotion detection system.
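To give a concrete, if simplified, picture of what training such a detector on an annotated corpus can involve, the Python sketch below fits a classifier on utterance-level prosodic statistics paired with emotion labels. It is a minimal illustration under stated assumptions, not the system presented in the talk: the feature values are random placeholders standing in for statistics computed from real recordings, and the label set and classifier choice are merely one common configuration.

# Minimal sketch: classify utterance-level emotion from prosodic statistics.
# All data here are hypothetical placeholders, not drawn from any real corpus.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical corpus: 200 utterances, 4 prosodic statistics per utterance.
# Columns: [mean F0 (Hz), F0 range (Hz), mean RMS energy, speech rate (syll/s)].
features = rng.normal(loc=[180.0, 60.0, 0.1, 4.5],
                      scale=[40.0, 25.0, 0.05, 1.0],
                      size=(200, 4))
# Hypothetical annotator labels; real-life corpora would use richer
# annotation schemes, often allowing blended or ambiguous emotions.
labels = rng.choice(["anger", "fear", "relief", "neutral"], size=200)

# Standardize features, then fit an SVM, one common choice for
# emotion detection from acoustic statistics.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, features, labels, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} (+/- {scores.std():.2f})")

Real-life data complicate this clean setup considerably: labels come from human annotators with imperfect agreement, emotion classes are highly imbalanced, and natural emotions are often blended rather than prototypical, which is precisely what makes such corpora both difficult and valuable.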