%0 Conference Proceedings %T Laugh When You’re Winning %+ Università degli studi di Genova = University of Genoa (UniGe) %+ La Cantoche Production [Paris] %+ Universität Augsburg [Augsburg] %+ University College London [London] (UCL) %+ Laboratoire Traitement et Communication de l'Information (LTCI) %+ Université de Mons (UMons) %+ Georgia Tech Lorraine [Metz] %A Mancini, Maurizio %A Ach, Laurent %A Bantegnie, Emeline %A Baur, Tobias %A Berthouze, Nadia %A Datta, Debajyoti %A Ding, Yu %A Dupont, Stéphane %A Griffin, Harry J. %A Lingenfelser, Florian %A Niewiadomski, Radoslaw %A Pelachaud, Catherine %A Pietquin, Olivier %A Piot, Bilal %A Urbain, Jérôme %A Volpe, Gualtiero %A Wagner, Johannes %Z Part 1: Fundamental Issues %< avec comité de lecture %( IFIP Advances in Information and Communication Technology %B 9th International Summer Workshop on Multimodal Interfaces (eNTERFACE) %C Lisbon, Portugal %Y Yves Rybarczyk %Y Tiago Cardoso %Y João Rosas %Y Luis M. Camarinha-Matos %I Springer %3 Innovative and Creative Developments in Multimodal Interaction Systems %V AICT-425 %P 50-79 %8 2013-07-15 %D 2013 %R 10.1007/978-3-642-55143-7_3 %K HCI %K laughter %K virtual characters %K game %K detection %K fusion %K multimodal %Z Computer Science [cs]Conference papers %X Developing virtual characters with naturalistic game-playing capabilities is an increasingly researched topic in Human-Computer Interaction. Possible roles for such characters include virtual teachers, personal care assistants, and companions for children. Laughter is an under-investigated emotional expression in both Human-Human and Human-Computer Interaction. The EU project ILHAIRE aims to study this phenomenon and to endow machines with laughter detection and synthesis capabilities. The “Laugh When You’re Winning” project, developed during the eNTERFACE 2013 Workshop in Lisbon, Portugal, aimed to set up and test a game scenario involving two human participants and one such virtual character.
The chosen game, the yes/no game, induces natural verbal and non-verbal interaction between participants, including frequent hilarious events, e.g., one of the participants saying “yes” or “no” and thereby losing the game. The setup includes software platforms, developed by the ILHAIRE partners, that allow automatic analysis and fusion of the human participants’ multimodal data (voice, facial expression, body movements, respiration) in real time to detect laughter. Furthermore, virtual characters endowed with multimodal skills were synthesised to interact with the participants by producing laughter in a natural way. %G English %Z TC 5 %Z WG 5.5 %2 https://inria.hal.science/hal-01350739/document %2 https://inria.hal.science/hal-01350739/file/978-3-642-55143-7_3_Chapter.pdf %L hal-01350739 %U https://inria.hal.science/hal-01350739 %~ INSTITUT-TELECOM %~ CNRS %~ UNIV-FCOMTE %~ ENST %~ SUP_IMS %~ TELECOM-PARISTECH %~ PARISTECH %~ IFIP %~ IFIP-AICT %~ CENTRALESUPELEC %~ UMI-GTL %~ UMI-COMPUTERSCIENCE %~ IFIP-TC %~ IFIP-TC5 %~ IFIP-AICT-425 %~ IFIP-WG %~ IFIP-WG5-5 %~ IFIP-ENTERFACE %~ LTCI %~ INSTITUTS-TELECOM