In the spring of 2012, some friends and I did an independent study focused on web technologies. The goal was to build a fully functioning web app as a team and present the final product in a demo session at the end of the semester. My team of five, mostly Digital Media Design students, created Phase Change.
Phase Change is a collaborative music creation app. It starts with one user, who records an audio clip of, say, a piano riff. When the recording is uploaded, it's visualized as a spherical node. Another user can then visit the newly created "tree" and listen to the recording by clicking on it. They can then choose to make their own contribution to the sound, and their recording is visualized as a sphere that branches off the previous node. Clicking on any node in the tree plays the combination of recordings along the path from the root to the selected node. In the picture above, the root node is pink, and both that recording and the yellow one are playing simultaneously.
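The path-to-root behavior boils down to a parent-pointer walk up the tree. Here's a minimal sketch of that step; the node shape (`recording`, `parent`) is illustrative, not the project's actual data model:

```javascript
// Hypothetical node shape: { recording, parent }, with parent === null at the root.
// Collect every recording on the path from the clicked node up to the root,
// then hand them off to be played simultaneously.
function recordingsForNode(node) {
  const recordings = [];
  for (let n = node; n !== null; n = n.parent) {
    recordings.push(n.recording);
  }
  return recordings.reverse(); // root-first, matching the order they were layered
}
```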
The app is built on the Web Audio API, meaning all the recordings can be made with the built-in mic on the user's laptop, without Flash. Building a Flash-free audio app was the main technical interest behind this project. The Web Audio API let us do some other nifty things too, like drawing the waveforms on the play strip and driving the frequency bars on the tree itself. For that visualization we performed a Fourier transform on the live audio and used the resulting data to modify the WebGL objects with Three.js.
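As a rough illustration of that analysis step, here's a sketch using today's unprefixed APIs (in 2012 this meant `webkitAudioContext` and `navigator.webkitGetUserMedia`). The scene setup and bar geometry are simplified stand-ins, not the project's actual visuals:

```javascript
// Pipe the live mic signal through an AnalyserNode and use the FFT output
// to drive the heights of a row of Three.js bars.
// Note: modern browsers may require a user gesture before audio can start.
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 64; // yields 32 frequency bins

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  audioCtx.createMediaStreamSource(stream).connect(analyser);
});

// Minimal Three.js scene: one box per frequency bin.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 40;
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

const bars = [];
for (let i = 0; i < analyser.frequencyBinCount; i++) {
  const bar = new THREE.Mesh(
    new THREE.BoxGeometry(0.8, 1, 0.8),
    new THREE.MeshBasicMaterial({ color: 0xffcc00 })
  );
  bar.position.x = i - analyser.frequencyBinCount / 2;
  bars.push(bar);
  scene.add(bar);
}

const freqData = new Uint8Array(analyser.frequencyBinCount);
function animate() {
  requestAnimationFrame(animate);
  analyser.getByteFrequencyData(freqData); // per-bin magnitudes, 0-255
  bars.forEach((bar, i) => {
    bar.scale.y = 1 + freqData[i] / 16; // taller bar = more energy in that band
  });
  renderer.render(scene, camera);
}
animate();
```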
My role on the team was to implement the bulk of the audio functionality. This included getting live audio from the user's microphone (with the help of recorder.js), packaging and saving the resulting audio files, dynamically building the audio context "graphs" for playback and analysis, and writing the functions that drew the waveforms and drove the frequency bars.

I also created the interface for editing recorded sounds, which was one of my favorite parts of the project. When you made a recording, you had the opportunity to adjust its start and end points and align it with the existing sounds, all in the waveform view. Your fresh recording was drawn on top of the existing waveforms, and you could click and drag it horizontally, lining the new waveform up with the ones behind it. Under the hood, this edited a piece of metadata associated with the recording that imposed a start delay on playback. You could also grab handles at the start and end of the track and drag them back and forth to trim the recording; these controls attached metadata in the same way.
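That delay and trim metadata maps neatly onto `AudioBufferSourceNode.start(when, offset, duration)`. Here's a sketch of how playback might consume it; the field names (`startDelay`, `trimStart`, `trimEnd`) are illustrative rather than the project's actual schema, and in the 2012-era API the equivalent call was `noteGrainOn`:

```javascript
// Play a set of decoded clips on a shared timeline, honoring each clip's
// user-set alignment (startDelay) and trim points (trimStart/trimEnd),
// all in seconds. Each clip is a { buffer, startDelay, trimStart, trimEnd }.
function playClips(audioCtx, clips) {
  const t0 = audioCtx.currentTime + 0.1; // small margin so all clips start in sync
  for (const clip of clips) {
    const source = audioCtx.createBufferSource();
    source.buffer = clip.buffer; // an AudioBuffer decoded from the saved file
    source.connect(audioCtx.destination);
    // start(when, offset, duration): begin at the shared origin plus the
    // clip's delay, skipping trimmed audio at the head and tail.
    source.start(
      t0 + clip.startDelay,
      clip.trimStart,
      clip.trimEnd - clip.trimStart
    );
  }
}
```

One nice property of this design: because the edits are just metadata, the underlying audio file never needs to be re-encoded, and a recording can be re-aligned or re-trimmed freely.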
Unfortunately I can't show pictures or a video of these features, because the project is no longer live (it required Facebook authentication, which expired, and the project was eventually taken offline).