Unsupervised State-space Decomposition in Hierarchical Reinforcement Learning
Duque Van Hissenhoven, Juan Agustin
We intend to develop a framework for determining subgoals in hierarchical reinforcement learning tasks in an unsupervised manner. The motivation for this research project is to make hierarchical reinforcement learning algorithms independent of human input (i.e., the subgoals, which currently must be handpicked by the algorithm designers). It would be interesting to determine whether unsupervised subgoal discovery converges toward the optimal solution and whether it has any impact on the running performance of the algorithm. To create these subgoals, we will discretize the state space of the problem to a given granularity, build an adjacency matrix between the resulting clusters, and apply spectral graph partitioning over that matrix to determine the subgoals.
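The proposed pipeline (discretized state clusters as graph nodes, transition counts as edge weights, spectral partitioning to expose bottlenecks) can be sketched roughly as follows. This is a minimal illustration, not the project's implementation: the function name, the toy adjacency matrix, and the use of the Fiedler vector for a two-way cut are all assumptions chosen to make the idea concrete.

```python
import numpy as np

def spectral_bipartition(adjacency):
    """Split a weighted adjacency matrix into two clusters using the sign
    of the Fiedler vector (the eigenvector of the second-smallest
    eigenvalue of the graph Laplacian L = D - A)."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues ascending
    fiedler = eigvecs[:, 1]                       # second-smallest eigenpair
    return fiedler >= 0                           # boolean label per node

# Toy adjacency over 6 state clusters: two dense communities (0-2, 3-5)
# joined by one weak "doorway" edge -- the kind of bottleneck whose
# boundary states are natural subgoal candidates.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
A[2, 3] = A[3, 2] = 0.1  # weak bottleneck edge

labels = spectral_bipartition(A)
# The cut separates clusters 0-2 from 3-5; the edge (2, 3) crosses it,
# so states in clusters 2 and 3 would be proposed as subgoals.
print(labels)
```

In this sketch the partition recovers the two communities, and the edge crossing the cut marks the bottleneck; in the actual framework the edge weights would come from observed transitions between discretized state clusters rather than a hand-built matrix.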