I have a bazillion tabs open in Chrome right now, and my Raspberry Pi is chugging away downloading software updates and libraries. Though progress hasn't been as fast as I'd like, I have a clearly-defined goal for the alpha review, and a much better sense of the technology I need.
I spent some time working out some of the hardware constraints. Last week I talked about a scaled down interface of 36 keys and the possibility of using thumbstick controllers as input mechanisms. I bought a pair of them (they're meant to be replacements for an Xbox 360 controller). The thumbstick is a pair of potentiometers (analog inputs), and the stick can be clicked downwards (a digital input). With ideally 36 of these devices, I'll need to process 72 analog inputs.
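The channel count is quick to sanity-check (the 36-key figure is my current target, not a final spec):

```python
# Rough input-count budget for the scaled-down 36-key interface.
KEYS = 36
ANALOG_PER_KEY = 2   # x and y potentiometers in each thumbstick
DIGITAL_PER_KEY = 1  # the stick's click-down switch

analog_channels = KEYS * ANALOG_PER_KEY
digital_channels = KEYS * DIGITAL_PER_KEY

print(analog_channels, digital_channels)  # 72 analog, 36 digital
```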
I've been thinking a lot about the interactions a thumbstick controller will encourage. It's a familiar device and implies smooth, continuous, multi-directional control, but also invites high-speed wiggling. The sound parameters will have to be mapped to the device in a way that high-intensity movement will still result in pleasing sounds. I think the mapping should be more complex than a simple x-y controller, but I'll have to actually play with it and see for myself.
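As a first experiment in something richer than a raw x-y mapping, one idea is a polar mapping: stick angle picks a timbre parameter while magnitude drives intensity, with a simple low-pass smoother to tame high-speed wiggling. This is just a sketch of the idea, not a mapping I've tested, and all the parameter names are made up:

```python
import math

def polar_map(x, y, prev_mag, smooth=0.8):
    """Map a thumbstick position (x, y in -1..1) to hypothetical
    synth parameters: angle -> timbre, smoothed magnitude -> intensity."""
    angle = math.atan2(y, x)              # -pi..pi
    mag = min(math.hypot(x, y), 1.0)      # clamp the diagonal corners
    # One-pole low-pass so frantic wiggling can't slam the intensity around.
    smoothed = smooth * prev_mag + (1.0 - smooth) * mag
    timbre = (angle + math.pi) / (2 * math.pi)  # normalize to 0..1
    return timbre, smoothed

# Stick pushed fully right: angle 0 maps to the middle of the timbre range,
# and the intensity eases toward full rather than jumping there.
t, m = polar_map(1.0, 0.0, prev_mag=0.0)
print(t, m)
```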
My goal for this week was to hook this guy up to the RPi and try to send a MIDI note to my computer. In doing the research for this task I learned a lot about the architecture and capabilities of the RPi. For one, I found another killer music tech blog featuring a really awesome Raspberry-Pi-driven synth. The Korg interface in that video is controlling a synthesis program running on the Raspberry Pi. I also found out that CSound and Pure Data (programming languages for data-driven music apps) run on the Pi. Pure Data can even access the GPIO pins with some hacks. With this software installed I should be able to write a simple synthesis patch that can interact with the hardware. This would let me develop the hardware and low-level software without having to worry about the MIDI protocol right away. The MIDI connectivity can be a stretch goal after the hardware concept is shown to work.

This brings us to the first big technical hurdle. Since the Raspberry Pi only has digital input pins, I need to use an Analog-to-Digital Converter (ADC) to turn the continuous analog voltage into a digital signal that the RPi can use. ADCs come in different resolutions, and I figure I'll need at least 10 bits (values up to 1023) to get some juicy, expressive sounds out of this instrument. There are also 12-bit ADCs, which would be sweet, but probably overkill.
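For a sense of what 10 bits buys: assuming the sticks swing across the Pi's 3.3 V rail (an assumption; the actual reference depends on how the ADC ends up wired), the step sizes work out to:

```python
V_REF = 3.3  # assumed reference voltage; depends on the ADC wiring

for bits in (8, 10, 12):
    steps = 2 ** bits
    print(f"{bits}-bit: {steps} steps, {V_REF / steps * 1000:.2f} mV per step")

# 10-bit gives 1024 steps (values 0-1023), about 3.2 mV each:
# 4x finer than 8-bit, and 12-bit is 4x finer still.
```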
Since the RPi only has 12 input pins available on the GPIO, and I'm reading 10 bits of input for 72 channels, I'll need to use a multiplexer to read each of the 72 inputs sequentially (I'm still researching this, but I think the SPI bus can help here). I borrowed a Texas Instruments ADC0804 from Detkin to experiment with over the weekend and learned that it can do a conversion in 100 microseconds (though the speed would vary with different chips). That makes for 10,000 conversions per second. Since I'd have to read each of the 72 channels through the multiplexer, the effective total read rate for all the analog inputs is about 139 Hz (7.2 ms per full scan). The rule of thumb for music devices is that the smallest delay a human ear can detect is about 11 milliseconds. These off-the-cuff calculations suggest the input-read delay would come in about 4 ms under that threshold, though the system would need extra time for signal processing and audio production. Fortunately, the Raspberry Pi has a beefy processor. If the synthesis program is lightweight enough, hopefully it won't be an issue.
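The back-of-the-envelope timing works out like this (100 µs per conversion is the ADC0804's figure; other chips would change it):

```python
CONVERSION_US = 100      # ADC0804 conversion time, microseconds
CHANNELS = 72            # 36 sticks x 2 pots each
PERCEPTION_MS = 11       # rough rule-of-thumb latency threshold

scan_ms = CONVERSION_US * CHANNELS / 1000   # one pass over every channel
scan_rate_hz = 1000 / scan_ms

print(f"full scan: {scan_ms} ms ({scan_rate_hz:.0f} Hz)")
print(f"headroom before {PERCEPTION_MS} ms: {PERCEPTION_MS - scan_ms:.1f} ms")
# full scan: 7.2 ms (139 Hz), leaving about 3.8 ms for synthesis and output
```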
The project could be architected in a different manner. I contacted Andrew McPherson for some advice about a project like this, and he recommended looking at the STM32F4 series of microcontrollers. After a lot of research I found the STM32F407, which has three 12-bit ADCs - with some multiplexing it would make pretty short work of those 72 inputs. The microcontroller itself is only $15. This platform would require some tradeoffs: while it's more convenient for input, I would definitely need to relay the data elsewhere for sound production, and I would also need to develop in Windows (yuck). I'm going to meet with Rahul on Monday to discuss some of these ideas.
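For comparison, a rough estimate of the STM32F407 route (the per-ADC sample rate here is a deliberately conservative guess, well under what the chip's ADCs are rated for):

```python
CHANNELS = 72
ADCS = 3                   # the STM32F407 has three independent ADCs
SAMPLE_RATE_HZ = 100_000   # conservative per-ADC guess; the real ceiling is far higher

channels_per_adc = CHANNELS // ADCS        # scanned in parallel across the three ADCs
scan_ms = channels_per_adc / SAMPLE_RATE_HZ * 1000

print(f"{channels_per_adc} channels/ADC, full scan in {scan_ms:.2f} ms")
# 24 channels/ADC, full scan in 0.24 ms -- roughly 30x faster
# than the single-ADC estimate for the Pi
```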
For now, I think it's perfectly valid to stick with RPi, especially considering the initial feasibility calculations. I also found a product called the Gertboard - it's essentially an I/O breakout board for the Raspberry Pi. It has a built-in 2-channel 10-bit ADC, in addition to 12 buffered digital I/O pins and an ATmega microprocessor for more direct timing-critical control. It could make this project a whole lot easier.
My work this week was focused mostly on a hardware "Hello World" program. I set up a simple circuit on a breadboard, connected to the Raspberry Pi with a GPIO extension cable. I installed both Pure Data and CSound on the Pi and made sure they work and can send sound over the built-in audio jack. Then I hooked up a light and a button and wrote a program that turns the light on and off when button clicks are received at the GPIO. While writing the program I had the opportunity to play around with simple GPIO controls and the C and Python libraries that allow hardware access in software. I also wanted to write a program that triggers a sound on a button click, but it's a little more complex than I expected; I intend to finish that in the next day or so. Now that the whole system is set up, it's much easier to work on. The next steps are solving the ADC problem, either by purchasing ADC chips for use with the Raspberry Pi, or by getting a microcontroller with integrated ADCs to use either as a slave to the Pi or as a standalone platform for the project.
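The button-to-light logic boils down to edge detection in a polling loop. Here it is as hardware-independent Python (the read/write callables stand in for the real GPIO library calls, so this is the shape of the logic, not my exact script):

```python
def run_toggle(read_button, write_led, polls):
    """Toggle an LED on each button press (rising edge), polling `polls` times.

    read_button() and write_led(state) stand in for real GPIO calls,
    e.g. RPi.GPIO's GPIO.input() and GPIO.output().
    """
    led = False
    prev = False
    for _ in range(polls):
        pressed = read_button()
        if pressed and not prev:   # rising edge: button just went down
            led = not led
            write_led(led)
        prev = pressed
    return led

# Quick simulation: a held press followed by a second tap.
presses = iter([False, True, True, False, True, False])
states = []
run_toggle(lambda: next(presses), states.append, 6)
print(states)  # [True, False] -> on at the first press, off at the second
```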
Below is a photo of the project board before I added the button (I'm away from my computer right now and don't have a picture, but I'll upload one later). The light is actually being controlled remotely by my laptop. The code for the project was synced to the Pi via git, which works, but is a bit clumsy. I should really learn vim and develop right on the Pi...
Edit: I forgot to mention the goal for alphas. My goal is to have one note working - one full set of controls that work with all the parameters for a single note, and a half-decent synth patch to demo it.