Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Wang, Shihan
dc.contributor.author: Wessels, Thomas
dc.date.accessioned: 2025-05-01T00:01:09Z
dc.date.available: 2025-05-01T00:01:09Z
dc.date.issued: 2025
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/48883
dc.description.abstract: In multi-agent tasks with heterogeneous agents, effective solutions may rely on the ability of agents to behave differently. While such heterogeneous multi-agent systems are common, only a minority of Multi-Agent Reinforcement Learning (MARL) methods focus on this heterogeneous setting. When agents are heterogeneous, widely used techniques such as parameter sharing become detrimental to learning optimal policies: with parameter sharing, agents effectively learn a shared policy, which limits their ability to behave differently. MARL solutions that aim to solve heterogeneous multi-agent systems therefore sacrifice scalability, rendering them ineffective in large-scale settings. This thesis introduces the HCL framework, which addresses the two-sided problem of ensuring diverse agent behaviour and scalable learning in heterogeneous MARL. HCL overcomes the limitations that plague many MARL methods in heterogeneous multi-agent systems by learning distinct representations of environment observations for different agent types through contrastive learning. Because the learning of these representations is decoupled from MARL, HCL can use parameter sharing without sacrificing diversity in agent behaviour. Through an experimental analysis on two heterogeneous multi-agent systems, we show that using distinct representations per agent type enhances the quality of the learned agent behaviour. Additionally, our results show that representation learning can be applied in novel ways to improve the performance of MARL compared to existing applications.
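The abstract's core idea, learning per-type observation representations with a contrastive objective so that a shared (parameter-sharing) policy can still produce type-specific behaviour, can be illustrated with a minimal sketch. This is not the thesis's HCL implementation; the encoder, loss name, and all hyperparameters below are assumptions chosen for illustration only.

```python
# Illustrative sketch only (not HCL itself): a supervised-contrastive-style
# objective where encoded observations of the SAME agent type are positives
# and observations of DIFFERENT types are negatives, encouraging distinct
# representation clusters per agent type.
import numpy as np

rng = np.random.default_rng(0)

def encode(obs, W):
    """Toy linear encoder producing L2-normalised representations."""
    z = obs @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def type_contrastive_loss(z, types, temperature=0.1):
    """InfoNCE-style loss over pairwise similarities with type labels."""
    sim = z @ z.T / temperature                      # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                   # exclude self-pairs
    logits = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    exp = np.exp(logits)
    prob = exp / exp.sum(axis=1, keepdims=True)
    same = (types[:, None] == types[None, :]) & ~np.eye(len(types), dtype=bool)
    # mean probability mass each anchor assigns to its same-type positives
    pos_prob = (prob * same).sum(axis=1) / same.sum(axis=1)
    return float(-np.log(pos_prob + 1e-12).mean())

# Two agent types whose observations have different statistics.
obs = np.vstack([rng.normal(0.0, 1.0, (8, 6)), rng.normal(3.0, 1.0, (8, 6))])
types = np.array([0] * 8 + [1] * 8)
W = rng.normal(size=(6, 4))
loss = type_contrastive_loss(encode(obs, W), types)
print(f"type-contrastive loss: {loss:.3f}")
```

In the decoupled setup the abstract describes, a loss of this kind would train the representation module separately, and the shared MARL policy would then consume the resulting per-type representations, which is how parameter sharing and behavioural diversity can coexist.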
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: A study on the usefulness of learning distinct representations for heterogeneous agents through contrastive learning in multi-agent deep reinforcement learning.
dc.title: Using Representation Learning for Scalable Multi-Agent Reinforcement Learning in Heterogeneous Multi-Agent Systems
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: reinforcement learning, multi-agent reinforcement learning, multi-agent systems, representation learning, contrastive learning, heterogeneous
dc.subject.courseuu: Artificial Intelligence
dc.thesis.id: 31119

