dc.rights.license | CC-BY-NC-ND
dc.contributor | Dr. Melisachew Wudage Chekol, Prof. Dr. Yannis Velegrakis, Dr. Klamer Schutte
dc.contributor.advisor | Chekol, Mel
dc.contributor.author | Simões Valente, Miguel
dc.date.accessioned | 2022-03-05T00:00:36Z
dc.date.available | 2022-03-05T00:00:36Z
dc.date.issued | 2022
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/559
dc.description.abstract | Current studies in Zero-Shot Learning for image classification use a weak Zero-Shot condition, relying on curated attributes as semantics to guide the classification of unseen images. This work instead assumes a strict Zero-Shot condition, using the most readily available data as guiding semantics: raw text from Wikipedia. The Zero-Shot condition itself is addressed by filling the gap left by the missing visual data with generated data, essentially simulating what is missing in the hope of classifying it when
dc.description.sponsorship | Utrecht University
dc.language.iso | EN
dc.subject | The thesis tackled Zero-Shot Learning by simulating, i.e. generating, the missing data with generative neural networks built from Normalizing Flows. To that end, I tried to establish a connection between semantics extracted from text, using several text-encoding methods, and visual features obtained from convolutional neural networks. In the end, the approach proved viable in the limited domain of the CUB2011 and ImageNet datasets.
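The dc.subject summary above describes the core idea: generate visual features for unseen classes from their text embeddings via a normalizing flow, then classify real images against the generated data. The following is a minimal sketch of that pipeline, not the thesis's actual model: the toy dimensions, the random projections standing in for a trained conditioning network, the single affine coupling-style step standing in for a full flow, and the nearest-prototype classifier are all illustrative assumptions.

```python
# Hypothetical sketch of generative zero-shot classification:
# push Gaussian noise through one conditional affine "flow" step
# (a stand-in for a full Normalizing Flow) to synthesize visual
# features per unseen class, then classify by nearest prototype.
import numpy as np

rng = np.random.default_rng(0)

D_TXT, D_VIS = 4, 6  # toy text/visual embedding sizes (assumed)

# Placeholder text embeddings for two unseen classes.
text_emb = {"cardinal": rng.normal(size=D_TXT),
            "warbler":  rng.normal(size=D_TXT)}

# A trained conditioning network would produce these mappings;
# here fixed random projections map text to per-dim scale/shift.
W_scale = rng.normal(scale=0.1, size=(D_TXT, D_VIS))
W_shift = rng.normal(size=(D_TXT, D_VIS))

def generate_features(t, n=50):
    """Sample n synthetic visual features conditioned on text t."""
    z = rng.normal(size=(n, D_VIS))       # base noise
    scale = np.exp(t @ W_scale)           # positive => invertible
    shift = t @ W_shift
    return z * scale + shift              # affine coupling-style step

# One prototype per unseen class from its generated features.
prototypes = {c: generate_features(t).mean(axis=0)
              for c, t in text_emb.items()}

def classify(x):
    """Assign a real visual feature to the nearest prototype."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

# A feature lying near the 'warbler' prototype is recovered.
query = prototypes["warbler"] + rng.normal(scale=0.01, size=D_VIS)
print(classify(query))  # warbler
```

The design choice worth noting is that the flow is invertible (strictly positive scales), so the same transform can, in principle, be trained by maximum likelihood on seen-class features and then run generatively for unseen classes.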
dc.title | Generative Based Zero-Shot Learning: Classifying Images from Text
dc.type.content | Master Thesis
dc.rights.accessrights | Open Access
dc.subject.courseuu | Artificial Intelligence
dc.thesis.id | 2582