Deep Learning of Hierarchical Skills for Dynamic Tasks in Minecraft
Summary
To train a robot to autonomously perform actions in an environment, it needs to learn what is and isn’t effective. One way to achieve this is Reinforcement Learning, in which the agent learns through trial and error. This approach is generic, but requires a great deal of training. Recent developments in Hierarchical and Deep Reinforcement Learning bring much more complex domains within reach. This study implemented recent algorithms from these fields and tested them on multiple tasks, including new dynamic tasks, in Minecraft, a complex 3D game environment. The results show that dynamic tasks can be learnt well using these techniques, but that Minecraft has its limitations as an AI simulator. For future work, we recommend further exploring parallelized learning algorithms and a fuller implementation of the hierarchical network.
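The trial-and-error loop at the heart of Reinforcement Learning can be illustrated with a minimal tabular Q-learning sketch. The toy environment below (a one-dimensional chain with a goal state) and all hyperparameters are illustrative assumptions, not the setup or algorithm used in this study:

```python
import random

random.seed(0)

N_STATES = 6          # states 0..5 on a one-dimensional chain; state 5 is the goal
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative hyperparameters

# Q-table: Q[state][action_index], initialised to zero
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; reward 1 only upon reaching the goal state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the table, occasionally explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        nxt, reward, done = step(state, ACTIONS[a])
        # Q-learning update: adjust the estimate from the observed outcome
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, the greedy policy should move right in every non-goal state
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

Deep Reinforcement Learning replaces this explicit table with a neural network that estimates action values, which is what makes high-dimensional environments such as Minecraft tractable.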