The Motion Grammar: Linguistic Perception, Planning, and Control
Abstract
We present the Motion Grammar: a novel unified representation
for task decomposition, perception, planning, and hybrid control
that provides a computationally tractable way to control robots in
uncertain environments with guarantees on completeness and correctness.
The grammar represents a policy for the task, which is
parsed in real time based on perceptual input. Branches of the syntax
tree form the levels of a hierarchical decomposition, and
individual robot sensor readings are given by tokens. We implement
this approach in the interactive game of Yamakuzushi on a
physical robot, resulting in a system that repeatably competes with
a human opponent in sustained game-play for matches of up to six
minutes.
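To make the token-and-production idea concrete, here is a minimal illustrative sketch (not from the paper): a toy grammar whose terminals are discretized sensor readings and whose productions select controllers, parsed by recursive descent. All token names and controller names here are hypothetical.

```python
# Toy "motion grammar" sketch. Tokens are discretized sensor readings;
# each production step appends the controller it activates.
#
# Grammar (informal):
#   TASK     -> APPROACH GRASP
#   APPROACH -> 'far' APPROACH | 'near'
#   GRASP    -> 'contact'

def parse_task(tokens):
    """Parse a stream of sensor tokens; return the controller sequence."""
    actions, i = [], 0
    # APPROACH: keep servoing while the object still reads 'far'
    while i < len(tokens) and tokens[i] == 'far':
        actions.append('servo_toward_object')
        i += 1
    if i < len(tokens) and tokens[i] == 'near':
        actions.append('pre_grasp')
        i += 1
    else:
        raise SyntaxError("expected token 'near'")
    # GRASP: requires a 'contact' reading to close the gripper
    if i < len(tokens) and tokens[i] == 'contact':
        actions.append('close_gripper')
        i += 1
    else:
        raise SyntaxError("expected token 'contact'")
    return actions

# Example: four successive sensor readings drive the parse.
print(parse_task(['far', 'far', 'near', 'contact']))
```

Because the parser consumes tokens as they arrive, each accepted token immediately determines the next controller, which is the sense in which the syntax tree doubles as a hierarchical task decomposition.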