Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Poppe, Ronald
dc.contributor.author: Rutten, Sam
dc.date.accessioned: 2024-07-26T00:00:56Z
dc.date.available: 2024-07-26T00:00:56Z
dc.date.issued: 2024
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/46942
dc.description.abstract: Deep-learning-based denoising models are replacing traditional denoising methods because of their better generalization ability and accuracy. Generating realistic pairwise training data is important for the accuracy of these deep denoising models on real-world noisy scenes. Most deep denoising work focuses on model accuracy without taking efficiency into account. Transformer models are the state-of-the-art denoising models, but they are computationally too heavy for real-time denoising. Knowledge distillation can compress these models without losing much accuracy. We show that training deep denoising models on image pairs generated with a real-world noise model yields good performance both on the generated test set and on images with real sensor noise. Furthermore, we show that a teacher-student architecture with knowledge distillation improves the accuracy of the student network. These student models gain considerable efficiency while retaining most of the teacher model's accuracy, yielding a better efficiency-accuracy trade-off for real-world image denoising.
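The teacher-student distillation described in the abstract can be sketched as a blended training loss. This is a minimal NumPy illustration, not the thesis's actual implementation: the function name `distillation_loss`, the MSE choice for both terms, and the weight `alpha` are assumptions for the sketch.

```python
import numpy as np

def distillation_loss(student_out, teacher_out, clean_target, alpha=0.7):
    """Blend a supervised loss (student vs. clean target) with a
    distillation loss (student vs. teacher output).

    alpha weights the supervised term; (1 - alpha) weights the
    teacher-mimicking term. All three inputs are image arrays of the
    same shape. This is an illustrative sketch, not the thesis's code.
    """
    supervised = np.mean((student_out - clean_target) ** 2)  # MSE vs. ground truth
    distill = np.mean((student_out - teacher_out) ** 2)      # MSE vs. teacher prediction
    return alpha * supervised + (1.0 - alpha) * distill
```

In this scheme the lightweight student is trained to match both the clean target and the heavier teacher's output, which is how the efficiency-accuracy trade-off mentioned in the abstract is obtained.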
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: Toward Efficient Raw Image Denoising with Foundation Models
dc.title: Toward Efficient Raw Image Denoising with Foundation Models
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Denoising; Computer vision; Deep learning; Vision transformers; Knowledge distillation; Real-world noise generation
dc.subject.courseuu: Artificial Intelligence
dc.thesis.id: 34943

