
dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Dignum, F.P.M.
dc.contributor.advisor: Zambetta, F.
dc.contributor.advisor: Thangarajah, J.
dc.contributor.author: Hof, W. van 't
dc.date.accessioned: 2018-12-18T18:00:38Z
dc.date.available: 2018-12-18T18:00:38Z
dc.date.issued: 2018
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/31527
dc.description.abstract: Exploration has been shown to be difficult in games where the reward space is sparse: the agent struggles to reach any reward and therefore cannot learn a good policy. One recent approach to this problem assists the agent in finding the reward by creating subgoals. Subgoals are states for which the agent receives an intrinsic reward upon reaching them. This motivates the agent to reach certain areas and thereby indirectly explore more of the environment. While this approach sounds intuitive and shows promise, the method has its flaws. In this thesis, those flaws are examined and multiple methods to improve performance are explored. Altering the representation of the intrinsic rewards has shown success. The other methods alter the constraints under which subgoals are created, namely the relative distance and the visit rate; neither improves performance, but both improve the quality of the subgoals.
dc.description.sponsorship: Utrecht University
dc.format.extent: 1345758
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.title: Exploration in Sparse Reward Games: Examining and Improving Exploration Effort Partitioning
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Reinforcement Learning, Sparse Reward Space, Exploration
dc.subject.courseuu: Artificial Intelligence
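
The abstract above describes subgoals as states that grant the agent an intrinsic reward when reached, on top of the sparse extrinsic reward. The following is a minimal, illustrative sketch of that kind of reward shaping, not the thesis's implementation; all names (shaped_reward, subgoals, intrinsic_bonus) are assumptions chosen for illustration.

    # Illustrative sketch only: one-time intrinsic bonus for reaching a subgoal state.
    def shaped_reward(state, extrinsic_reward, subgoals, visited, intrinsic_bonus=1.0):
        """Return the extrinsic reward plus an intrinsic bonus for newly reached subgoals."""
        reward = extrinsic_reward
        if state in subgoals and state not in visited:
            reward += intrinsic_bonus  # motivate the agent to reach this area
            visited.add(state)         # pay the bonus only on the first visit
        return reward

    # Example: a sparse-reward grid world where only the goal pays an extrinsic reward.
    subgoals = {(2, 2), (4, 4)}   # hand-picked intermediate states
    visited = set()
    print(shaped_reward((2, 2), 0.0, subgoals, visited))  # 1.0: intrinsic bonus only
    print(shaped_reward((2, 2), 0.0, subgoals, visited))  # 0.0: subgoal already visited

In this sketch the agent still receives the environment's sparse reward unchanged; the bonus merely adds a learning signal along the way, which is the intuition the abstract attributes to the subgoal approach.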

