A CRF that combines tactile sensing and vision for haptic mapping
Asoka Kumar Shenoi, Ashwin Kumar
We consider the problem of enabling a robot to efficiently obtain a dense haptic map of its visible surroundings using the complementary properties of vision and tactile sensing. Our approach assumes that visible surfaces that look similar to one another are likely to have similar haptic properties. In our previous work, we introduced an iterative algorithm that enabled a robot to infer dense haptic labels across visible surfaces in an RGB-D image given a sequence of sparse haptic labels. In this work, we describe how dense conditional random fields (CRFs) can be applied to the same problem and present results from evaluating a dense CRF’s performance in simulated trials with idealized haptic labels. We evaluated our method on several publicly available RGB-D image datasets of cluttered indoor scenes pertinent to robot manipulation. In these simulated trials, the dense CRF substantially outperformed our previous algorithm, correctly assigning haptic labels to an average of 93% (versus 76% in our previous work) of all object pixels in an image given the highest number of contact points per object, and to an average of 81% (versus 63% in our previous work) given a low number of contact points per object. We also compared the performance of a dense CRF using a uniform prior with that of a dense CRF using a prior obtained from the visible scene by a fully convolutional network trained for visual material recognition; the use of the convolutional network further improved the algorithm’s performance. Finally, we performed experiments with the humanoid robot DARCI reaching into a cluttered foliage environment while using our algorithm to create a haptic map. The algorithm correctly assigned labels to 82.52% of the scenes with trunks and leaves after 10 reaches into the environment.
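To illustrate the idea of propagating sparse haptic labels over a visual scene with a dense CRF, the following is a minimal, self-contained sketch, not the paper's implementation. It runs naive O(N²) mean-field inference with NumPy on a toy 6×6 "image" of two visually distinct regions; in practice a dense CRF over a full RGB-D image would use an efficient filtering-based implementation. All names (`unary`, `K`, the kernel widths, the Potts weight `w`) and the toy data are illustrative assumptions.

```python
import numpy as np

# Toy scene: a 6x6 "image" with two visually distinct regions
# (dark left half, e.g. "trunk"; bright right half, e.g. "leaf").
H, W, L = 6, 6, 2                      # height, width, number of haptic labels
color = np.zeros((H, W))
color[:, W // 2:] = 1.0                # right half has a different appearance

# Sparse haptic observations: one simulated contact point per region,
# with a uniform prior everywhere the robot has not touched.
unary = np.full((H, W, L), 0.5)
unary[0, 0] = [0.99, 0.01]             # touched: haptic label 0
unary[0, W - 1] = [0.01, 0.99]         # touched: haptic label 1

# Dense pairwise (appearance) kernel over ALL pixel pairs:
# k(i,j) = exp(-|p_i-p_j|^2 / (2*th_p^2) - |c_i-c_j|^2 / (2*th_c^2)),
# encoding the assumption that nearby, similar-looking pixels share labels.
ys, xs = np.mgrid[0:H, 0:W]
pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
col = color.ravel()[:, None]
d_pos = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
d_col = ((col[:, None, :] - col[None, :, :]) ** 2).sum(-1)
K = np.exp(-d_pos / (2 * 3.0**2) - d_col / (2 * 0.1**2))
np.fill_diagonal(K, 0.0)               # no self-messages

# Mean-field inference with a Potts compatibility (agreeing pixels rewarded).
Q = unary.reshape(-1, L) / unary.reshape(-1, L).sum(1, keepdims=True)
U = -np.log(unary.reshape(-1, L))      # unary energies
w = 2.0                                # Potts weight (assumed, untuned)
for _ in range(10):
    msg = K @ Q                        # messages from all other pixels
    Q = np.exp(-U + w * msg)
    Q /= Q.sum(1, keepdims=True)       # normalize per-pixel distributions

labels = Q.argmax(1).reshape(H, W)
print(labels)                          # two touches label the whole scene
```

With only two contact points, the appearance kernel carries each haptic label across its visually similar region, which is the behavior the simulated trials above quantify.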