Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Paperno, Dr. D.
dc.contributor.author: Bezema, D.L.
dc.date.accessioned: 2019-08-02T17:01:10Z
dc.date.available: 2019-08-02T17:01:10Z
dc.date.issued: 2019
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/33151
dc.description.abstract: In this study the generalization capacity of Convolutional Neural Networks (CNNs) for interpreted languages is investigated. Two CNN models, one of which used a curriculum, were trained on two interpreted languages of different complexity. The results show that a CNN, contrary to previous findings for Long Short-Term Memory networks, does not benefit from a curriculum during training. Models trained on the more complex interpreted language show adequate generalization ability, while models trained on the less complex language show no generalization ability at all. This suggests that a CNN benefits from more complex training data, because such data forces the model to capture more generally applicable features that carry over to testing. Overall, the results of this study show that CNNs possess a generalization capacity for interpreted languages that is competitive with recurrent and recursive models from the literature.
dc.description.sponsorship: Utrecht University
dc.format.extent: 414619
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.title: Investigating The Generalization Ability Of Convolutional Neural Networks For Interpreted Languages
dc.type.content: Bachelor Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Convolutional Neural Networks, generalization capacity, interpreted languages
dc.subject.courseuu: Kunstmatige Intelligentie

