
dc.contributor.advisor: Parikh, Devi
dc.contributor.author: Shrivastava, Ayush
dc.date.accessioned: 2021-06-10T16:51:16Z
dc.date.available: 2021-06-10T16:51:16Z
dc.date.created: 2021-05
dc.date.issued: 2021-05-04
dc.date.submitted: May 2021
dc.identifier.uri: http://hdl.handle.net/1853/64710
dc.description.abstract: A visually-grounded navigation instruction can be interpreted as a sequence of expected observations and actions an agent following the correct trajectory would encounter and perform. Based on this intuition, we formulate the problem of finding the goal location in Vision-and-Language Navigation (VLN) within the framework of Bayesian state tracking -- learning observation and motion models conditioned on these expectable events. Together with a mapper that constructs a semantic spatial map on-the-fly during navigation, we formulate an end-to-end differentiable Bayes filter and train it to identify the goal by predicting the most likely trajectory through the map according to the instructions. The resulting navigation policy constitutes a new approach to instruction following that explicitly models a probability distribution over states, encoding strong geometric and algorithmic priors while enabling greater explainability. Our experiments show that our approach outperforms a strong LingUNet baseline when predicting the goal location on the map. On the full VLN task, i.e., navigating to the goal location, our approach achieves promising results with less reliance on navigation constraints. In the second half of the thesis, we study the challenging problem of releasing a robot in a previously unseen environment, and having it follow unconstrained natural language navigation instructions. Recent work on the task of VLN has achieved significant progress in simulation. To assess the implications of this work for robotics, we transfer a VLN agent trained in simulation to a physical robot. To bridge the gap between the high-level discrete action space learned by the VLN agent, and the robot's low-level continuous action space, we propose a subgoal model to identify nearby waypoints, and use domain randomization to mitigate visual domain differences.
For accurate sim and real comparisons in parallel environments, we annotate a 325 m² office space with 1.3 km of navigation instructions, and create a digitized replica in simulation. We find that sim-to-real transfer to an environment not seen in training is successful if an occupancy map and navigation graph can be collected and annotated in advance (success rate of 46.8% vs. 55.9% in sim), but much more challenging in the hardest setting with no prior mapping at all (success rate of 22.5%).
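The Bayesian state-tracking formulation in the abstract -- maintaining a probability distribution over map states, alternating motion prediction and observation updates -- can be illustrated with a minimal discrete Bayes filter over a 1-D grid of cells. This is a toy sketch only: the motion matrix and observation likelihoods below are hypothetical stand-ins, whereas the thesis learns these models end-to-end and filters over a 2-D semantic map built during navigation.

```python
def predict(belief, motion_model):
    """Motion step: b'(x) = sum over x' of p(x | x') * b(x').

    `motion_model[xp][x]` is the (assumed, hand-written) probability
    of moving from cell xp to cell x.
    """
    n = len(belief)
    new_belief = [0.0] * n
    for x in range(n):
        for xp in range(n):
            new_belief[x] += motion_model[xp][x] * belief[xp]
    return new_belief


def update(belief, likelihood):
    """Observation step: weight each cell by p(observation | x), renormalize."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    z = sum(posterior)  # normalizing constant
    return [p / z for p in posterior]


# Toy run: uniform prior over 3 cells, deterministic shift-right motion,
# then an observation whose likelihood peaks at cell 2.
belief = [1 / 3, 1 / 3, 1 / 3]
motion = [
    [0, 1, 0],  # cell 0 -> cell 1
    [0, 0, 1],  # cell 1 -> cell 2
    [1, 0, 0],  # cell 2 -> cell 0 (wraps, keeps the matrix stochastic)
]
belief = predict(belief, motion)
belief = update(belief, [0.1, 0.1, 0.8])
goal_estimate = belief.index(max(belief))  # most likely cell given evidence
```

In the thesis, the analogues of `motion` and the likelihood vector are neural networks conditioned on the instruction and on observations, so the whole filter stays differentiable and trainable end-to-end.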
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.subject: Vision-and-Language Navigation, VLN, Instruction Following, Grounded Language Navigation, Sim2Real Transfer, Robot, Mapping, Filter, Bayesian State Tracking
dc.title: Bayesian State Tracking and Sim-to-Real Transfer for Vision-and-Language Navigation
dc.type: Thesis
dc.description.degree: M.S.
dc.contributor.department: Computer Science
thesis.degree.level: Masters
dc.contributor.committeeMember: Batra, Dhruv
dc.contributor.committeeMember: Lee, Stefan
dc.date.updated: 2021-06-10T16:51:17Z


