dc.rights.license | CC-BY-NC-ND | |
dc.contributor.advisor | Wijnholds, G.J. | |
dc.contributor.advisor | Broersen, J.M. | |
dc.contributor.author | Teerlink, Y.K. | |
dc.date.accessioned | 2021-04-06T18:00:16Z | |
dc.date.available | 2021-04-06T18:00:16Z | |
dc.date.issued | 2021 | |
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/39213 | |
dc.description.abstract | Compared to neural networks (NNs), humans can learn new concepts from very little data. This ability to learn so efficiently might be due to the use of abstractions. To find similarities between human and machine learning, this research analyzes whether NNs benefit from syntactic information during training. We aim to answer the following question: how does enriching training data with syntactic knowledge affect the performance of a NN on natural language processing tasks? This research examines the results of Long Short-Term Memory (LSTM) models trained on two types of datasets: one without Part-of-Speech (POS) tags, a form of abstract knowledge, and one supplemented with POS tags. The results show that an LSTM trained on a relatively small dataset supplemented with POS tags outperforms an LSTM trained on a regular dataset. This increase in performance might suggest that neural networks benefit from abstract information, which in turn might point to similarities in the way humans and machines learn. | |
dc.description.sponsorship | Utrecht University | |
dc.format.extent | 608483 | |
dc.format.mimetype | application/pdf | |
dc.language.iso | en | |
dc.title | Enriching training data with syntactic knowledge and the effect on performance of a neural network on natural language processing tasks | |
dc.type.content | Bachelor Thesis | |
dc.rights.accessrights | Open Access | |
dc.subject.courseuu | Kunstmatige Intelligentie | |
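The abstract above describes supplementing training data with POS tags before training an LSTM. The thesis record does not specify which tagger or enrichment scheme was used; the following is a minimal sketch of one plausible interpretation, assuming NLTK's off-the-shelf tagger and an interleaving scheme in which each token is followed by its tag.

```python
# Minimal sketch of POS-tag enrichment, assuming NLTK as the tagger.
# The thesis does not specify its tooling or enrichment format;
# interleaving each token with its tag is one possible reading.
import nltk

# One-time downloads of the tokenizer and tagger models.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def enrich_with_pos(sentence: str) -> str:
    """Return the sentence with each token followed by its POS tag."""
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)  # list of (token, tag) pairs
    return " ".join(f"{tok} {tag}" for tok, tag in tagged)

print(enrich_with_pos("The cat sat on the mat."))
# e.g. "The DT cat NN sat VBD on IN the DT mat NN . ."
```

Both the plain and the enriched version of each sentence would then be fed to otherwise identical LSTMs, so that any performance difference can be attributed to the added syntactic information.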