
dc.rights.license  CC-BY-NC-ND
dc.contributor.advisor  Dignum, dr. F.P.M.
dc.contributor.advisor  Poppe, dr. Ir. R.W.
dc.contributor.author  Heer, P.B.U.L. de
dc.date.accessioned  2018-09-03T17:00:33Z
dc.date.available  2018-09-03T17:00:33Z
dc.date.issued  2018
dc.identifier.uri  https://studenttheses.uu.nl/handle/20.500.12932/30803
dc.description.abstract  To train a robot to autonomously perform actions in an environment, it needs to learn which actions are effective and which are not. One way to achieve this is Reinforcement Learning, in which the agent learns through trial and error. This approach is generic but requires a great deal of training. Recent developments in Hierarchical and Deep Reinforcement Learning bring much more complex domains within reach. This study implemented the newest algorithms in these fields and tested them on multiple tasks, including new dynamic tasks, in Minecraft, a complex 3D game environment. The results show that dynamic tasks can be learned well with these techniques, but also that Minecraft has its limitations as an AI simulator. For future work, we advise further exploring parallelized learning algorithms and extending the implementation of the hierarchical network.
dc.description.sponsorship  Utrecht University
dc.format.extent  1103997
dc.format.mimetype  application/pdf
dc.language.iso  en
dc.title  Deep Learning of Hierarchical Skills for Dynamic Tasks in Minecraft
dc.type.content  Master Thesis
dc.rights.accessrights  Open Access
dc.subject.keywords  Machine Learning, Reinforcement Learning, Deep Learning, Neural Networks, Minecraft, Deepmind, Hierarchy, Hierarchical Reinforcement Learning, Skills, Tasks, Dynamic
dc.subject.courseuu  Game and Media Technology
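
The abstract above names Reinforcement Learning only at a high level. As a point of reference, the sketch below shows the bare trial-and-error loop the abstract refers to, using tabular Q-learning on a toy one-dimensional corridor; the corridor, the constants, and the update rule are generic textbook choices assumed for illustration, not the deep or hierarchical networks evaluated in the thesis itself.

    # Illustrative only: a tiny tabular Q-learning agent on a 1-D corridor,
    # not the thesis's deep or hierarchical implementation.
    import random

    N_STATES = 6          # corridor cells 0..5; the reward sits at the right end
    ACTIONS = [-1, +1]    # step left or step right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
    EPISODES = 500

    # Q-table: estimated return for each (state, action) pair
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Move the agent; reaching the last cell yields reward 1 and ends the episode."""
        nxt = min(max(state + action, 0), N_STATES - 1)
        done = nxt == N_STATES - 1
        return nxt, (1.0 if done else 0.0), done

    for _ in range(EPISODES):
        state, done = 0, False
        while not done:
            # Trial and error: mostly exploit the best known action, sometimes explore
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward reward + discounted best future value
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = nxt

    # After training, the greedy policy should step right from every non-terminal cell
    print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])

Deep Reinforcement Learning replaces the table with a neural network over raw observations, and hierarchical variants layer higher-level skill selection on top of such low-level loops.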