Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Gauthier, David
dc.contributor.author: Sahu, Kshitij
dc.date.accessioned: 2025-08-20T23:01:17Z
dc.date.available: 2025-08-20T23:01:17Z
dc.date.issued: 2025
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/49799
dc.description.abstract: In the age of digital music streaming, the ability to automatically understand and organize audio content is critical for applications such as recommendation, retrieval, and genre detection. This thesis introduces a self-supervised learning approach for extracting semantic embeddings from raw audio waveforms using a hybrid CNN-Transformer model. The embeddings are trained with the Barlow Twins contrastive learning objective and are intended for use in content-based music recommendation systems. By combining local acoustic detail extraction via CNNs with sequence modelling via Transformers, the system learns rich representations without labelled data. We evaluate the embeddings using t-SNE visualizations, FAISS-based similarity retrieval, and a prototype interactive recommendation demo. The results demonstrate the effectiveness of our approach in organizing music meaningfully and enabling cold-start recommendation without user history.
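
For readers wanting to reproduce the ideas summarized above, here is a minimal sketch of the Barlow Twins objective the abstract refers to, written in PyTorch. The function name, the lambda weight, and the epsilon are illustrative assumptions, not code from the thesis.

import torch

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    # z_a, z_b: (batch, dim) embeddings of two augmented views of the same audio clip
    n, d = z_a.shape
    # standardize each embedding dimension across the batch
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    # cross-correlation matrix between the two views, shape (dim, dim)
    c = (z_a.T @ z_b) / n
    # invariance term: pull diagonal entries toward 1
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # redundancy-reduction term: push off-diagonal entries toward 0
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

The FAISS-based similarity retrieval mentioned in the evaluation can be as simple as an exact inner-product index over L2-normalized embeddings; the data below is random placeholder data, and the dimension of 256 is an assumption.

import faiss
import numpy as np

d = 256                                  # embedding dimension (illustrative)
catalog = np.random.rand(1000, d).astype("float32")
faiss.normalize_L2(catalog)              # normalize so inner product = cosine similarity
index = faiss.IndexFlatIP(d)             # exact inner-product search
index.add(catalog)                       # index all track embeddings
query = catalog[:1]                      # one query embedding, shape (1, d)
scores, ids = index.search(query, 10)    # top-10 most similar tracks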
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: What you hear is what you get: An exploration of audio level feature extraction for music recommendations
dc.title: What you hear is what you get: An exploration of audio level feature extraction for music recommendations
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.courseuu: Applied Data Science
dc.thesis.id: 52109

