Personalized Lifelong Learning from Demonstration
Jayanthi, Sravan V.
Learning from Demonstration (LfD) approaches empower end-users to teach robots novel tasks via demonstrations of the desired behaviors, democratizing access to robotics. A key challenge in LfD research is that users tend to provide heterogeneous demonstrations for the same task due to varying preferences. These preferences manifest as different strategies that users employ to complete a task. A robot that can learn these varying strategies can successfully personalize its behavior according to the desires of the expert. Therefore, it is essential to develop LfD algorithms that ensure flexibility (the robot adapts to personalized strategies), efficiency (the robot achieves sample-efficient adaptation, requiring only a few demonstrations from the user), and scalability (the robot reuses a concise set of strategies to represent a large range of behaviors). In this thesis, we propose a novel algorithm, Dynamic Multi-Strategy Reward Distillation (DMSRD), which distills common knowledge between heterogeneous demonstrations, leverages learned strategies to construct mixture policies, and continues to improve by learning from all available data. Our personalized, federated, and lifelong LfD architecture surpasses benchmarks in two continuous control problems with an average 62% improvement in policy returns, 50% improvement in log likelihood, and 36% decrease in the estimated KL divergence between learned behavior and demonstrations, alongside stronger task reward correlation and more precise strategy rewards.
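To make the mixture-policy idea concrete, the sketch below shows one simple way previously learned strategy policies might be combined to explain a handful of new demonstration state-action pairs. All names, the softmax-weighted mixture, and the squared-error objective are illustrative assumptions for exposition; they are not the thesis's actual DMSRD formulation.

```python
import numpy as np

def fit_mixture_weights(policies, states, actions, lr=0.1, steps=500):
    """Hypothetical sketch: fit softmax mixture weights over fixed strategy
    policies by minimizing squared error against demonstrated actions.
    (Assumed setup for illustration; not the thesis's actual objective.)"""
    k = len(policies)
    # Precompute each strategy's predicted actions on the demo states: (k, n, d).
    preds = np.stack([np.array([p(s) for s in states]) for p in policies])
    targets = np.array(actions)  # (n, d)
    logits = np.zeros(k)
    for _ in range(steps):
        w = np.exp(logits) / np.exp(logits).sum()   # softmax mixture weights
        mix = np.tensordot(w, preds, axes=1)        # (n, d) mixture prediction
        err = mix - targets
        # Gradient of 0.5*||mix - targets||^2 w.r.t. the logits (chain rule
        # through the softmax).
        grad_w = np.tensordot(preds, err, axes=([1, 2], [0, 1]))  # (k,)
        grad_logits = w * (grad_w - np.dot(w, grad_w))
        logits -= lr * grad_logits
    return np.exp(logits) / np.exp(logits).sum()

# Toy check: two 1-D strategies; the new demonstration follows the second one,
# so its mixture weight should dominate after fitting.
p1 = lambda s: np.array([1.0])
p2 = lambda s: np.array([-1.0])
states = [np.zeros(2)] * 5
actions = [np.array([-1.0])] * 5
w = fit_mixture_weights([p1, p2], states, actions)
print(w)  # weight on the second strategy should dominate
```

Because only the k mixture weights are optimized while the strategy policies stay frozen, adaptation of this kind needs very little data, which is the intuition behind the sample-efficiency claim above.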