Feature-based synthesis for sonification and psychoacoustic research
Abstract
We present a general framework for synthesizing audio that exhibits arbitrary sets of perceptually motivated, quantifiable acoustic features. Much recent work has focused on finding acoustic features that describe perceptually relevant aspects of sound. The ability to synthesize sounds defined by arbitrary feature values would allow perception researchers to generate stimuli "to order," and would provide a direct means of testing the perceptual relevance and characteristics of such features. The methods we describe also offer a straightforward approach to the problem of mapping from data to synthesis control parameters for sonification.
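To make the core idea concrete, the following is a minimal sketch of feature-based synthesis posed as an optimization problem: choose synthesis parameters so the rendered audio attains a target value of a quantifiable feature. The feature (spectral centroid), the toy two-partial synthesizer, and the use of scipy are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the authors' implementation): adjust a synthesis
# parameter until the rendered audio matches a target feature value.
import numpy as np
from scipy.optimize import minimize_scalar

SR = 44100  # sample rate in Hz (assumed)
N = 4096    # analysis frame length (assumed)

def spectral_centroid(x):
    """Amplitude-weighted mean frequency of the magnitude spectrum."""
    mags = np.abs(np.fft.rfft(x, n=N))
    freqs = np.fft.rfftfreq(N, d=1.0 / SR)
    return np.sum(freqs * mags) / (np.sum(mags) + 1e-12)

def synthesize(brightness, dur=N / SR):
    """Toy parametric synthesizer: two partials whose balance is set
    by a single 'brightness' parameter in [0, 1]."""
    t = np.arange(int(SR * dur)) / SR
    low = np.sin(2 * np.pi * 440.0 * t)
    high = np.sin(2 * np.pi * 3520.0 * t)
    return (1.0 - brightness) * low + brightness * high

def match_feature(target_hz):
    """Find the synthesis parameter whose output centroid matches target_hz."""
    err = lambda b: (spectral_centroid(synthesize(b)) - target_hz) ** 2
    res = minimize_scalar(err, bounds=(0.0, 1.0), method="bounded")
    return res.x

b = match_feature(2000.0)  # request a 2 kHz spectral centroid "to order"
print(b, spectral_centroid(synthesize(b)))
```

In this reading, sonification falls out naturally: data values map to target feature values, and the same search supplies the synthesis control parameters.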