Generative-Based Zero-Shot Learning: Classifying Images from Text
Summary
Current studies in Zero-Shot Learning for image classification assume a weak Zero-Shot condition, relying on curated attributes as semantics to guide the classification of unseen images. Instead, this work assumes a strict Zero-Shot condition, using the most readily available data as guiding semantics: raw text from Wikipedia. The Zero-Shot condition itself is addressed by filling the gap left by the missing visual data with generated data, essentially simulating what is missing in hopes of classifying it when