Feature extraction from an audio stream is typically used for the visual analysis and measurement of sound. This paper describes a set of methods for using feature extraction to manipulate concatenative synthesis, and develops experiments that reconfigure feature-based concatenative synthesis systems within a live, interactive context. The aim is to explore sound creation and manipulation within an interactive, creative feedback loop.