dc.rights.license | CC-BY-NC-ND | |
dc.contributor.advisor | Löffler, M. | |
dc.contributor.advisor | Alakuijala, J. | |
dc.contributor.author | Asseldonk, R.R.L.J. van | |
dc.date.accessioned | 2018-06-25T17:01:29Z | |
dc.date.available | 2018-06-25T17:01:29Z | |
dc.date.issued | 2018 | |
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/29174 | |
dc.description.abstract | Modern neural network architectures can have as many as hundreds of millions of parameters. This makes them power- and memory-hungry, and impedes running such networks on resource-constrained devices such as phones. Sparse networks can achieve performance similar to that of dense networks with a fraction of the parameters. However, sparsification is usually done as an afterthought, yielding no benefits during the learning phase. In this thesis, we propose to optimise network architecture and parameters simultaneously. Apart from the benefits of sparsity, this eliminates the need to choose a particular network architecture in advance. | |
dc.description.sponsorship | Utrecht University | |
dc.format.extent | 1305441 | |
dc.format.mimetype | application/pdf | |
dc.language.iso | en | |
dc.title | Online Learning of Sparse Network Architectures | |
dc.type.content | Master Thesis | |
dc.rights.accessrights | Open Access | |
dc.subject.keywords | machine learning, graphs, architecture learning, metalearning, sparsification | |
dc.subject.courseuu | Game and Media Technology | |