Skin Pattern Sonification Using NMF-based Visual Feature Extraction and Learning-based PMSon
Abstract
This paper describes the use of sonification to represent
scanned image data of skin patterns on the human body. Skin
patterns exhibit different characteristics and visual features
depending on the position and condition of the skin on the
human body. These visual features are extracted and analyzed for
sonification in order to broaden the dimensions of data
representation and to explore the diversity of sound in each
human body. Non-negative matrix factorization (NMF) is
employed to parameterize the skin pattern images, and the
resulting visual parameters are connected to sound
parameters through support vector regression (SVR). We
compare the sound results with the data from the skin pattern
analysis to examine how effectively each individual skin pattern
is mapped to produce accurate sonification results. The use of
sonification in this research thus suggests a novel
approach to parameter mapping sonification by designing
personal sonic instruments that use the entire human body as
data.
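
The pipeline summarized above (NMF-based visual parameterization followed by a learned SVR mapping to sound parameters) can be illustrated with a minimal sketch. This is not the authors' implementation: the patch size, number of NMF components, and the placeholder "sound parameter" targets are illustrative assumptions, using scikit-learn for both stages.

```python
# Minimal sketch of an NMF -> SVR parameter-mapping sonification pipeline.
# All data below is synthetic; shapes and targets are hypothetical.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Assume each row is a flattened, non-negative grayscale skin-pattern patch.
patches = rng.random((200, 32 * 32))          # 200 patches of 32x32 pixels

# 1) NMF: factor the patch matrix; per-patch activations serve as visual parameters.
nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
visual_params = nmf.fit_transform(patches)

# 2) SVR: learn a mapping from visual parameters to sound parameters
#    (e.g., pitch, amplitude, spectral centroid -- placeholder targets here).
sound_params = rng.random((200, 3))
mapper = MultiOutputRegressor(SVR(kernel="rbf", C=1.0))
mapper.fit(visual_params, sound_params)

# Sonify a new patch: extract its NMF activations, then predict sound parameters.
new_patch = rng.random((1, 32 * 32))
predicted_sound = mapper.predict(nmf.transform(new_patch))
print(predicted_sound)
```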