Direct Superpixel Labeling for Mobile Robot Navigation Using Learned General Optical Flow Templates
Abstract
Towards the goal of autonomous obstacle avoidance for mobile robots, we present a method for superpixel labeling using optical flow templates. Optical flow provides a rich source of information that complements image appearance and point clouds in determining traversability. While much past work applies optical flow to traversability estimation heuristically, the method presented here instead classifies flow according to several optical flow templates, each specific to a typical environment shape. Our first contribution over prior work in superpixel labeling with optical flow templates is large improvements in accuracy and efficiency, achieved by inferring labels directly from spatiotemporal gradients rather than from independently computed optical flow, and by improved optical flow modeling for obstacles. Our second contribution is to extend these labeling methods to arbitrary camera optics without requiring camera calibration, by developing and demonstrating a method for learning optical flow templates from unlabeled video. Our experiments demonstrate successful obstacle detection on an outdoor mobile robot dataset.