dc.rights.license | CC-BY-NC-ND | |
dc.contributor.advisor | Wang, Shihan | |
dc.contributor.author | Wessels, Thomas | |
dc.date.accessioned | 2025-05-01T00:01:09Z | |
dc.date.available | 2025-05-01T00:01:09Z | |
dc.date.issued | 2025 | |
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/48883 | |
dc.description.abstract | In multi-agent tasks with heterogeneous agents, effective solutions may rely on the ability of agents to behave differently. While such heterogeneous multi-agent systems are common, only a minority of Multi-Agent Reinforcement Learning (MARL) methods focus on this heterogeneous setting. When agents are heterogeneous, widely used techniques such as parameter sharing become detrimental to the learning of optimal policies: by sharing parameters, agents effectively learn a single shared policy, which limits their ability to behave differently. MARL methods that do address heterogeneous multi-agent systems therefore tend to sacrifice scalability, rendering them ineffective for large-scale settings. This thesis introduces the HCL framework, which aims to solve the two-sided problem of ensuring both diverse agent behaviour and scalable learning in heterogeneous MARL. HCL overcomes the limitations that plague many MARL methods in heterogeneous multi-agent systems by learning distinct representations of environment observations for different agent types through contrastive learning. Because the learning of these representations is decoupled from MARL, HCL can use parameter sharing without sacrificing diversity in agent behaviour. Through an experimental analysis on two heterogeneous multi-agent systems, we show that using distinct representations per agent type improves the quality of the learned agent behaviour. Additionally, our results show that representation learning can be applied in novel ways to improve the performance of MARL compared to existing applications. | |
dc.description.sponsorship | Utrecht University | |
dc.language.iso | EN | |
dc.subject | A study on the usefulness of learning distinct representations for heterogeneous agents through contrastive learning in multi-agent deep reinforcement learning. | |
dc.title | Using Representation Learning for Scalable Multi-Agent Reinforcement Learning in Heterogeneous Multi-Agent Systems | |
dc.type.content | Master Thesis | |
dc.rights.accessrights | Open Access | |
dc.subject.keywords | reinforcement learning, multi-agent reinforcement learning, multi-agent systems, representation learning, contrastive learning, heterogeneous | |
dc.subject.courseuu | Artificial Intelligence | |
dc.thesis.id | 31119 | |