Learning Nash equilibria in zero-sum stochastic games via entropy-regularized policy approximation
In this thesis, we explore the use of policy approximation to reduce the computational cost of learning Nash equilibria in multi-agent reinforcement learning. Existing multi-agent reinforcement learning methods are either computationally demanding or converge to a Nash equilibrium only under additional stringent assumptions. We propose a new algorithm for zero-sum stochastic games in which each agent simultaneously learns a Nash policy and an entropy-regularized policy. The two policies help each other toward convergence: the former guides the latter to the desired Nash equilibrium, while the latter serves as an efficient approximation of the former. We demonstrate that previous training experience can be transferred to a different environment, enabling the agents to adapt quickly. We also provide a dynamic hyper-parameter scheduling scheme for further expedited convergence. Empirical results on a number of stochastic games show that the proposed algorithm converges to the Nash equilibrium while exhibiting an order-of-magnitude speed-up over existing algorithms.
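To illustrate the core idea of entropy-regularized policies in a zero-sum setting, the sketch below runs smoothed fictitious play on rock-paper-scissors: each player repeatedly plays a softmax (entropy-regularized) best response to the other's current policy. This is only a minimal illustration of the entropy-regularization principle, not the thesis's actual algorithm; the temperature `tau` and the averaging step size are assumed values chosen for the demo.

```python
import numpy as np

def soft_best_response(payoff, opponent_policy, tau):
    """Entropy-regularized (softmax) best response to a fixed opponent policy.

    payoff[i, j] is the player's payoff for action pair (i, j);
    tau > 0 is the entropy temperature (tau -> 0 recovers the hard argmax).
    """
    q = payoff @ opponent_policy      # expected payoff of each own action
    z = q / tau
    z -= z.max()                      # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Rock-paper-scissors: row player's zero-sum payoff matrix.
A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])

# Start away from equilibrium and average in smoothed best responses.
x = np.array([0.8, 0.1, 0.1])         # row player's policy
y = np.array([0.1, 0.8, 0.1])         # column player's policy
for _ in range(2000):
    x = 0.9 * x + 0.1 * soft_best_response(A, y, tau=0.5)
    y = 0.9 * y + 0.1 * soft_best_response(-A.T, x, tau=0.5)

# Both policies approach the uniform Nash equilibrium [1/3, 1/3, 1/3].
print(np.round(x, 3), np.round(y, 3))
```

The entropy term smooths the best-response map, which is what makes the iteration stable; in the thesis's setting the same smoothing makes the regularized policy a cheap, well-behaved approximation of the Nash policy it tracks.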