The Minority Game: Individual and Social Learning
Summary
Learning has received much attention in both Artificial Intelligence (AI) and Game Theory (GT), as it is the key to intelligent and rational behavior. However, in a multi-agent setting such as Multi-Agent Systems (MAS), where the environment changes according to the actions of the players, participants cannot afford to be fully rational and must resort to heuristics. In such cases classical Game Theory fails to provide convergence results for the adjustment process and thus loses predictive power. Evolutionary Game Theory (EGT), inspired by biology, has proven suitable for analyzing bounded rationality and heuristic learning through the robust replicator dynamics. In this thesis we use a well-known congestion game with an odd number of participants, the Minority Game (MG), as a learning paradigm. We review the most important learning methods applied to the MG, motivated from both the economics and the machine-learning perspectives, together with their results. We then analyze individual reinforcement learning through replicator dynamics and derive the asymptotic properties of the learning procedure in the MG. Moreover, we compare individual learning with social learning through imitation using agent-based simulations. The two types of learning share common convergence characteristics but differ in their resource-allocation schemes and in their robustness. Individual reinforcement learning is a utilitarian process that maximizes system efficiency with no regard for single-agent performance, whereas social imitation yields a more egalitarian outcome in which individual scores are almost equal.
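To make the setting concrete, the following is a minimal sketch of a Minority Game round structure: an odd number of agents repeatedly pick one of two sides, and the agents on the minority side score a point. The simple propensity-update rule used here is an illustrative placeholder, not the replicator-dynamics or imitation models analyzed in the thesis; the function name and the learning-rate parameter are assumptions for the sketch.

```python
import random

def play_minority_game(n_agents=101, n_rounds=100, lr=0.05, seed=0):
    """Bare-bones Minority Game simulation.

    Each round, every agent picks side 0 or 1; agents on the strict
    minority side (guaranteed to exist since n_agents is odd) score
    a point. Each agent keeps a propensity to choose side 1 and
    nudges it toward whichever side turned out to be the minority
    (an illustrative reinforcement rule, not the thesis's model).
    """
    assert n_agents % 2 == 1, "an odd number of agents guarantees a minority"
    rng = random.Random(seed)
    props = [0.5] * n_agents   # each agent's probability of choosing side 1
    scores = [0] * n_agents
    attendance = []            # number of agents choosing side 1 each round
    for _ in range(n_rounds):
        choices = [1 if rng.random() < p else 0 for p in props]
        ones = sum(choices)
        attendance.append(ones)
        minority = 1 if ones < n_agents - ones else 0
        target = float(minority)
        for i, c in enumerate(choices):
            if c == minority:
                scores[i] += 1
            # move propensity toward the winning (minority) side;
            # the convex update keeps it a valid probability
            props[i] += lr * (target - props[i])
    return scores, attendance
```

A typical experiment would track the variance of `attendance` around `n_agents / 2` as a measure of system efficiency, and the spread of `scores` as a measure of how egalitarian the allocation is.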