dc.description.abstract | For virtual characters to behave and communicate in a human-like manner, they should not only communicate verbally but also express non-verbal emotions and actions such as laughter. Getting a virtual character to laugh in a natural-looking way is a challenging task, because laughter involves not only smiling but also characteristic motions of the rest of the body. It becomes even more challenging if the character is simulated in real time and is not acting autonomously but reacting directly to input signals such as sound or a video feed. Previous real-time approaches have focused on detecting laughter from sound and/or video and converting the input signals into facial laughter features, but none of them include full-body motions, which are equally important for a natural-looking laughter simulation. Using prerecorded or live laughter sound as input, we directly drive synthesized breathing and facial animation, and we introduce a laughing-energy measure to select predefined full-body animations whose intensity matches the input laughter. With our method, it is possible to simulate natural-looking laughter on virtual characters that respond directly to input signals such as sound, in games and other real-time applications involving virtual characters, contributing to more human-like behavior and livelier interaction. | |
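The core mechanism the abstract describes, deriving a laughing energy from the input sound and using it to pick a full-body animation of matching intensity, could look roughly like the minimal sketch below. It assumes short-time RMS loudness as the energy measure and three intensity buckets; all names (`LAUGH_ANIMATIONS`, `laughing_energy`, `select_body_animation`, the thresholds) are hypothetical illustrations, not the authors' actual implementation or feature set.

```python
# Sketch of energy-driven animation selection, under the assumptions above.
import numpy as np

# Hypothetical library of predefined full-body laughter animations,
# grouped by intensity level.
LAUGH_ANIMATIONS = {
    "low": ["chuckle_seated", "light_shoulder_shake"],
    "medium": ["laugh_lean_back", "laugh_head_tilt"],
    "high": ["belly_laugh", "laugh_double_over"],
}


def laughing_energy(frame: np.ndarray, prev_energy: float,
                    smoothing: float = 0.9) -> float:
    """Short-time RMS energy of one audio frame, exponentially smoothed
    so the selected animation does not flicker between frames."""
    rms = float(np.sqrt(np.mean(frame.astype(np.float64) ** 2)))
    return smoothing * prev_energy + (1.0 - smoothing) * rms


def select_body_animation(energy: float,
                          low_t: float = 0.02, high_t: float = 0.1) -> str:
    """Map the smoothed energy to an intensity bucket and pick one of the
    predefined full-body animations of matching intensity."""
    if energy < low_t:
        level = "low"
    elif energy < high_t:
        level = "medium"
    else:
        level = "high"
    return str(np.random.choice(LAUGH_ANIMATIONS[level]))


if __name__ == "__main__":
    # Feed synthetic audio frames of increasing loudness through the pipeline.
    energy = 0.0
    for loudness in (0.01, 0.05, 0.2, 0.5):
        frame = loudness * np.random.randn(1024)
        energy = laughing_energy(frame, energy)
        print(f"energy={energy:.4f} -> {select_body_animation(energy)}")
```

In this sketch the exponential smoothing plays the role of keeping the real-time response stable: raw per-frame loudness is noisy, so bucketing the smoothed value avoids rapid switching between body animations while still tracking the laughter's overall intensity.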