dc.rights.license | CC-BY-NC-ND | |
dc.contributor.advisor | Yumak, dr. Z. | |
dc.contributor.author | Sietsma, L.H. | |
dc.date.accessioned | 2020-02-20T19:03:47Z | |
dc.date.available | 2020-02-20T19:03:47Z | |
dc.date.issued | 2019 | |
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/34859 | |
dc.description.abstract | We introduce a new method to create a facial animation controller. We find a high-level control space bottom-up from data using the generative part of a Wasserstein Generative Adversarial Network (WGAN). By training a WGAN on face tracking data from the IEMOCAP corpus, we show that a WGAN is able to learn the behavior of the human face. By training the WGAN on different emotions, we show that the WGAN is successful at learning human face movement matching the emotions that it was trained on. We also analyse the behavior of the latent space. We found that the generator provides control over certain aspects of the face and sometimes even relates to emotions. By implementing sliders for the latent space variables we were able to create a facial animation controller using the generative part of the WGAN. | |
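To illustrate the controller idea summarized in the abstract, the sketch below shows how per-dimension sliders over a latent vector could drive a trained WGAN generator. It is a minimal sketch, not the thesis implementation: the latent dimension, the number of face-tracking features, the MLP architecture, and the names `Generator` and `sliders_to_face` are illustrative assumptions.

```python
# Minimal sketch (not the thesis implementation): driving the generative part
# of a trained WGAN with one UI "slider" per latent variable.
# LATENT_DIM, FACE_DIM, and the architecture below are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 8   # assumed number of latent variables, one slider each
FACE_DIM = 46    # assumed number of face-tracking features per frame

class Generator(nn.Module):
    """Toy stand-in for the generative part of a trained WGAN."""
    def __init__(self, latent_dim: int = LATENT_DIM, face_dim: int = FACE_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, face_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def sliders_to_face(generator: nn.Module, slider_values: list[float]) -> torch.Tensor:
    """Map slider positions (one per latent variable) to face parameters."""
    z = torch.tensor(slider_values, dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():
        return generator(z).squeeze(0)

if __name__ == "__main__":
    gen = Generator()            # in practice: load trained WGAN generator weights
    sliders = [0.0] * LATENT_DIM
    sliders[2] = 0.8             # nudge one latent dimension via its slider
    face_params = sliders_to_face(gen, sliders)
    print(face_params.shape)     # -> torch.Size([46])
```

In this setup each slider maps directly to one latent coordinate, so moving a slider traces a path through the generator's learned control space and yields the corresponding face parameters.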
dc.description.sponsorship | Utrecht University | |
dc.language.iso | en | |
dc.title | Facial Animation Controller Using Generative Adversarial Networks | |
dc.type.content | Master Thesis | |
dc.rights.accessrights | Open Access | |
dc.subject.keywords | Machine Learning, Facial Animation, Generative Adversarial Networks | |
dc.subject.courseuu | Game and Media Technology | |