
dc.rights.license  CC-BY-NC-ND
dc.contributor  Dr. Melisachew Wudage Chekol, Prof. Dr. Yannis Velegrakis, Dr. Klamer Schutte
dc.contributor.advisor  Chekol, Mel
dc.contributor.author  Simões Valente, Miguel
dc.date.accessioned  2022-03-05T00:00:36Z
dc.date.available  2022-03-05T00:00:36Z
dc.date.issued  2022
dc.identifier.uri  https://studenttheses.uu.nl/handle/20.500.12932/559
dc.description.abstract  Current studies in Zero-Shot Learning for image classification use a weak Zero-Shot condition, relying on curated attributes as the semantics that guide the classification of unseen images. This work instead assumes a strict Zero-Shot condition, using the most readily available data as guiding semantics: raw text from Wikipedia. The Zero-Shot condition itself is solved by filling the gap left by the missing visual data with generated data, essentially simulating what is missing in the hope of classifying it when […]
dc.description.sponsorship  Utrecht University
dc.language.iso  EN
dc.subject  The thesis tackled Zero-Shot Learning by simulating, i.e. generating, the missing data with generative neural networks built from Normalizing Flows. To that end, I tried to establish a connection between semantics extracted from text using several text-encoding methods and visual features obtained from convolutional neural networks. In the end, the approach proved viable in the limited domain of the CUB-200-2011 and ImageNet datasets.
dc.title  Generative Based Zero-Shot Learning: Classifying Images from Text
dc.type.content  Master Thesis
dc.rights.accessrights  Open Access
dc.subject.courseuu  Artificial Intelligence
dc.thesis.id  2582
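
The abstract and subject describe the method only at a high level. What follows is a minimal, illustrative sketch, not the thesis code, of the central component: a conditional normalizing flow (RealNVP-style affine coupling layers) that maps Gaussian noise to CNN visual features, conditioned on a text embedding of the class. All module names, dimensions (2048-d visual features, 300-d text embeddings), and hyperparameters are assumptions chosen for illustration.

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer, conditioned on a text embedding."""
    def __init__(self, feat_dim, text_dim, hidden=256):
        super().__init__()
        self.half = feat_dim // 2
        # Predict scale and shift for the second half from the first half + text.
        self.net = nn.Sequential(
            nn.Linear(self.half + text_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, (feat_dim - self.half) * 2),
        )

    def forward(self, x, t):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, b = self.net(torch.cat([x1, t], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)  # bounded log-scales keep sampling stable
        return torch.cat([x1, x2 * torch.exp(s) + b], dim=1), s.sum(dim=1)

class ConditionalFlow(nn.Module):
    """Stack of couplings; halves are swapped so every dimension is transformed."""
    def __init__(self, feat_dim, text_dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            AffineCoupling(feat_dim, text_dim) for _ in range(n_layers))

    def forward(self, z, t):
        log_det = torch.zeros(z.size(0), device=z.device)
        for layer in self.layers:
            z, ld = layer(z, t)
            z = z.roll(z.size(1) // 2, dims=1)  # swap the two halves
            log_det = log_det + ld
        return z, log_det

# Sample synthetic visual features for an unseen class from its text embedding.
flow = ConditionalFlow(feat_dim=2048, text_dim=300)
text_emb = torch.randn(64, 300)  # stand-in for an encoded Wikipedia article
fake_feats, _ = flow(torch.randn(64, 2048), text_emb)

Pushing noise forward through the flow yields samples; training would maximize the likelihood of real seen-class features through the inverse direction, using the accumulated log-determinant.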
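A second hedged sketch of the surrounding pipeline described in the subject field: raw class descriptions are encoded (TF-IDF here is one stand-in for the "several text-encoding methods"), synthetic visual features are generated for each unseen class, and an ordinary classifier is fit on them so that real CNN features of unseen images can be classified. `generate_features` is a hypothetical placeholder for a trained conditional flow such as the one sketched above.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

class_texts = {  # raw Wikipedia-style descriptions of unseen classes
    "cardinal": "A songbird with bright red plumage and a prominent crest.",
    "albatross": "A large seabird with very long, narrow wings.",
}
text_embs = TfidfVectorizer().fit_transform(class_texts.values()).toarray()

def generate_features(text_emb, n=200, feat_dim=2048):
    # Placeholder: in the thesis this role is played by the conditional flow.
    rng = np.random.default_rng(0)
    return rng.normal(size=(n, feat_dim)) + text_emb.mean()

X = np.vstack([generate_features(e) for e in text_embs])
y = np.repeat(np.arange(len(class_texts)), 200)
clf = LogisticRegression(max_iter=1000).fit(X, y)
# At test time, real CNN features of unseen images go through clf.predict.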

