The goal for this week was to get ready for my alpha review on Friday. My plan was to have one controller working end to end: a physical input device whose signal is read by an ADC, then sent to a synthesis program on the Raspberry Pi to be rendered as sound and played through the headphone jack.
Now that I have a basic sensor working, I can start experimenting with the controller mapping. Currently the synth has three channels of input: frequency, timbre, and loudness. Frequency is mapped to the Y position of the thumbstick, timbre to the X position, and loudness to the distance from the thumbstick position to its neutral resting point. I defined a deadzone around the resting point to prevent unwanted triggers. The Pure Data synthesis patch I wrote receives messages via OSC (which works much faster than I expected) and renders the output as a mix between a sine wave and a sawtooth wave.
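To make the mapping concrete, here is a minimal sketch of the idea in Python. The deadzone radius, frequency range, and mix formula are my own illustrative choices, not the values in the actual patch, and the OSC send step is omitted:

```python
import math

DEADZONE = 0.1  # illustrative deadzone radius around the resting point


def map_stick(x, y):
    """Map a thumbstick position (x, y each in [-1, 1]) to synth parameters.

    Returns None inside the deadzone (no sound), otherwise a
    (frequency, timbre, loudness) tuple. Ranges are placeholders.
    """
    dist = math.hypot(x, y)  # distance from the neutral resting point
    if dist < DEADZONE:
        return None  # inside the deadzone: suppress unwanted triggers
    # Rescale so loudness ramps from 0 at the deadzone edge to 1 at full deflection.
    loudness = min((dist - DEADZONE) / (1.0 - DEADZONE), 1.0)
    # Y position -> frequency: two octaves above A3 (illustrative range).
    frequency = 220.0 * 2 ** (y + 1)
    # X position -> timbre: 0 = pure sine, 1 = pure sawtooth.
    timbre = (x + 1) / 2
    return frequency, timbre, loudness


def sample(frequency, timbre, loudness, t):
    """One output sample at time t: crossfade between sine and sawtooth."""
    phase = (t * frequency) % 1.0
    sine = math.sin(2 * math.pi * phase)
    saw = 2.0 * phase - 1.0
    return loudness * ((1 - timbre) * sine + timbre * saw)
```

In the real system this mapping would run on the sensor-reading side and ship the three parameters to the Pure Data patch over OSC each frame; the crossfade here just mirrors what the patch does with them.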
I recorded a video of the prototype for your viewing pleasure.