> hsm

with the haptic sound machine, we wanted people to enjoy interacting with sound through gestures and haptics in a playful and easy way, without any background knowledge. there is no right or wrong, only immediate, synchronous feedback. we focus on the experience itself rather than on creating harmonious sounds. using fabric in this unusual context creates a close relationship between movement, sonority and the sense of touch, so the haptic vocabulary one needs to play the instrument is minimal.

user testing the hsm

01 - live performance

we were allowed to present the haptic sound machine at the "media day" at hfu. here is a short trailer:


02 - about

the haptic sound machine is made of a wooden box with fabric stretched over a detachable top layer. the heart of the machine (on the inside) is a depth camera (in our case a microsoft kinect), connected to a macbook pro. we used processing and max msp to process the signals coming from the depth camera and to modulate the sound. the graphic shown below (fetzner, friedmann 2007) visualizes the interaction between the user and the sound machine.


> interaction as a sphere of activity (Fetzner, Daniel; Friedmann, Bruno: Interaktionsstrukturen mit digitalen Medien – Technologien im Labor Neue Medien (LNM) der Hochschule Furtwangen. In: Bild - Raum - Interaktion - Angewandte empirische Wirkungsforschung, Arbeitspapier Nr. 4, Fakultät Digitale Medien, Hochschule Furtwangen University, 2007.)

reference visual auditive scene (vas)
the scene as seen from the user's perspective, as opposed to the actually resulting situation

hc interface (human-computer interface)
the fabric in connection with the depth camera

scene control
code within the processing ide; communication between processing and the max msp patches, combined with data from the kinect (see the technical overview below)

scene analysis
the user analyzes the result (feedback) of their interaction with the interface and the scene resulting from it. they derive Δs from their reference vas and the actually resulting vas.

Δs
based on our installation, Δs describes the amount of experience gained from using the machine iteratively. given the immediate feedback of the hsm and the experience gained over time, the declining Δs leads to increasing immersion.
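in short (our reading of the model, not a formula taken from the paper): Δs = reference vas - actual vas. the better users can anticipate the machine's response, the closer Δs gets to zero.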

03 - technical overview

as mentioned in the description above, the hsm was realized using a depth camera, processing and max msp.

processing

processing was responsible for retrieving data from the kinect (v1), packaging it into blobs and evaluating them to resolve the x and y position of each blob. it then sends the data via open sound control (osc) to max. we used oscp5 for udp/osc data transmission, simpleopenni to receive the kinect data and blobdetection for the blob detection itself. if you want to know more about the processing code, just drop me a mail: hi@danhep.de
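to give a rough idea of how these pieces fit together, here is a minimal processing sketch of that pipeline (not our exact code; the osc address /hsm/blob and the port numbers are assumptions you would match to your own max patch):

```java
// minimal sketch: kinect depth -> blob detection -> osc out to max msp
import SimpleOpenNI.*;   // kinect v1 depth data
import blobDetection.*;  // blob detection library by v3ga
import oscP5.*;          // osc over udp
import netP5.*;

SimpleOpenNI kinect;
BlobDetection blobs;
OscP5 osc;
NetAddress max;          // where the max msp patch listens

void setup() {
  size(640, 480);        // kinect v1 depth resolution
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  blobs = new BlobDetection(640, 480);
  blobs.setPosDiscrimination(true);  // detect bright areas (flip if near maps to dark)
  blobs.setThreshold(0.3f);          // tune to the distance of the fabric
  osc = new OscP5(this, 12000);      // local listening port (unused here)
  max = new NetAddress("127.0.0.1", 7400); // max msp on the same machine
}

void draw() {
  kinect.update();
  PImage depth = kinect.depthImage();
  depth.loadPixels();
  blobs.computeBlobs(depth.pixels);
  image(depth, 0, 0);

  // send the normalized center of every detected blob to max
  for (int i = 0; i < blobs.getBlobNb(); i++) {
    Blob b = blobs.getBlob(i);
    if (b == null) continue;
    OscMessage msg = new OscMessage("/hsm/blob"); // address is an assumption
    msg.add(i);    // blob index
    msg.add(b.x);  // x position, normalized 0..1
    msg.add(b.y);  // y position, normalized 0..1
    osc.send(msg, max);
  }
}
```

on the max side, a [udpreceive 7400] object followed by [route /hsm/blob] should be enough to pick these messages apart and feed the positions into the synthesis patch.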

max

screenshot from max msp: interactive audio

> event processing within max

04 - early prototype

prototypes are a fast and easy way to figure out whether the idea in your head can be realized before investing a huge amount of time (and in some cases money). here we used some cardboard boxes and a bedsheet, later replaced by the wooden table and another fabric. thanks again to the smart space center (in particular prof. dr. matthias wölfel) for lending us the kinect and contributing some good ideas!

early prototype of our haptic sound machine

> user testing the prototype for the first time