Sequential Decision Making with Strategic Agents and Limited Feedback
Sequential decision-making is a natural model for machine learning applications in which the learner must make online decisions in real time while simultaneously learning from the sequential data to make better decisions in the future. Classical work has focused on variants of the problem in which the data distribution is either stochastic or adversarial, or in which the feedback on the learner's decisions is either partial or complete. With the rapid rise of large online markets, sequential learning methods are increasingly deployed in complex multi-agent systems where agents may behave strategically to optimize their own objectives. This adds a new dimension to the sequential decision-making problem: the learner must account for the strategic behavior of the agents it learns from, who may try to steer its future decisions in their favor.

This thesis aims to design effective online decision-making algorithms from two points of view: that of system designers seeking to learn in environments with strategic agents and limited feedback, and that of strategic agents seeking to optimize their own objectives. In the first part of the thesis, we focus on repeated auctions and design mechanisms under which the auctioneer can learn effectively in the presence of strategic bidders; conversely, we study how agents can bid in repeated auctions, or mount data-poisoning attacks, to maximize their own objectives.

In the second part, we consider an online learning setting where feedback on the learner's decisions is expensive to obtain. We introduce an online learning algorithm, inspired by techniques from active learning, that fast-forwards a small fraction of more informative examples ahead in the queue. This allows the learner to match the performance of the optimal online algorithm while querying feedback on only a small fraction of points.
Finally, in the third part of the thesis, we consider a new learning objective for stochastic multi-armed bandits that promotes merit-based fairness in opportunity for individuals and groups.
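One common way to formalize merit-based fairness in opportunity is to pull each arm with probability proportional to a merit function of its estimated mean reward, so that better arms get more pulls but no arm is shut out. The following Python sketch illustrates that general idea under this assumption; the function names, the warm-up scheme, and the particular merit function are hypothetical, not the thesis's actual formulation.

```python
import random

def merit_based_bandit(pull, n_arms, horizon,
                       merit=lambda m: max(m, 0.0) + 0.1):
    """Stochastic bandit that samples each arm with probability
    proportional to merit(empirical mean), rather than always
    exploiting, so every arm's opportunity tracks its merit.

    `pull(k)` returns a stochastic reward for arm k.
    Returns (pull counts, empirical means). Hypothetical sketch.
    """
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for k in range(n_arms):               # warm-up: pull each arm once
        r = pull(k)
        counts[k] += 1
        means[k] += (r - means[k]) / counts[k]
    for _ in range(horizon - n_arms):
        weights = [merit(m) for m in means]
        k = random.choices(range(n_arms), weights=weights)[0]
        r = pull(k)
        counts[k] += 1
        means[k] += (r - means[k]) / counts[k]
    return counts, means
```

With two Bernoulli arms of means 0.2 and 0.8, the merit-proportional rule pulls the better arm far more often, yet the weaker arm continues to receive a merit-commensurate share of pulls instead of being abandoned.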