Facial Animation Controller Using Generative Adversarial Networks
Summary
We introduce a new method to create a facial animation controller. We learn a high-level control space bottom-up from data using the generative part of a Wasserstein Generative Adversarial Network (WGAN). By training a WGAN on face-tracking data from the IEMOCAP corpus, we show that it is able to learn the behavior of the human face. By training the WGAN on different emotions, we show that it successfully learns human face movement matching the emotions it was trained on. We also analyse the behavior of the latent space and find that the generator gives control over specific aspects of the face and, in some cases, even relates to emotions. By implementing sliders for the latent-space variables, we create a facial animation controller from the generative part of the WGAN.
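To make the slider idea concrete, the following is a minimal sketch (not the authors' implementation) of how latent-space sliders could drive a trained WGAN generator to produce one frame of facial parameters. The network architecture, latent dimensionality, and output size are assumptions for illustration only.

```python
import torch
import torch.nn as nn

LATENT_DIM = 10          # assumed number of latent sliders
FACE_PARAM_DIM = 46      # assumed size of one frame of face-tracking features

class Generator(nn.Module):
    """Stand-in WGAN generator: latent vector -> one frame of face parameters."""
    def __init__(self, latent_dim=LATENT_DIM, out_dim=FACE_PARAM_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, z):
        return self.net(z)

def sliders_to_face(generator, slider_values):
    """Treat each latent dimension as a UI slider and generate a face pose."""
    z = torch.tensor(slider_values, dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():
        return generator(z).squeeze(0)

generator = Generator()           # in practice: load the trained WGAN weights
sliders = [0.0] * LATENT_DIM      # neutral position for every slider
sliders[3] = 1.5                  # move one slider to change one aspect of the face
face_params = sliders_to_face(generator, sliders)
print(face_params.shape)          # one frame of facial animation parameters
```

In such a setup, each latent dimension is exposed directly as a slider, so the mapping from control values to face parameters is simply a forward pass through the generator.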