I'm making slow but steady progress with the camera simulation. The blurring effect I mentioned at the end of the first post on this is now in there, using the low-pass filter code that I posted yesterday. I'm really happy with how that panned out, so I thought I'd post another video and the code. I've used a bit of off-screen drawing to do this, which I don't recall being covered in the Processing documentation, so I'll write that up in another post.
I still need to give some thought to how I'm going to make it react to sound. There are a lot of available parameters in an audio stream (amplitude, envelope, frequency content, etc.) and it's important to pick the right ones and process them correctly, so that it's apparent to the observer that the musical events are driving the video while the visuals remain pleasing in their own right.
In trying to further humanise my wobbly camera project, I needed some of the parameters to change in a linked way, but with one of them responding at a slower rate. To do this you need a low-pass filter, which will be familiar to anyone who's spent any time making electronic music (anyone else should follow the link). There are a few ways to implement such a filter, but for my purposes a quick moving-average version was good enough. It works by buffering its inputs into a FIFO and outputting the mean of the buffer's contents (I wouldn't do a good job of explaining the maths behind this so I'm not even going to try). The degree of filtering can be changed by lengthening or shortening the FIFO queue. Code for the filter class and a little example are below.
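The idea in isolation looks something like this: a minimal plain-Java sketch of a moving-average filter, with class and method names of my own invention rather than the actual code from the post.

```java
import java.util.ArrayDeque;

// A simple moving-average low-pass filter: buffers the last
// `size` inputs in a FIFO and returns their mean. A longer
// buffer means heavier smoothing (a lower cutoff).
public class MovingAverage {
    private final ArrayDeque<Float> buffer = new ArrayDeque<>();
    private final int size;
    private float sum = 0;

    public MovingAverage(int size) {
        this.size = size;
    }

    // Push a new input sample and return the filtered output.
    public float push(float value) {
        buffer.addLast(value);
        sum += value;
        if (buffer.size() > size) {
            sum -= buffer.removeFirst();
        }
        return sum / buffer.size();
    }
}
```

Feed it a step input (a jump from 0 to some value) and the output ramps up gradually over `size` samples instead of jumping, which is exactly the "responds at a slower rate" behaviour described above.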
I was watching videos of neurons growing this morning (it's nearly work, at least) and it occurred to me that it might be fun to get Processing to draw something similar. I had a whole plan worked out with branching, making the strands repel one another, maybe a bit of L-system action. As is not unusual when experimenting with Processing, I did a little bit of the plan, saw something pretty and got side-tracked, in this case by the trails of particles with brownian motion.
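Brownian motion is cheap to fake, by the way: each frame you nudge a particle's position by a small random amount and draw a line from where it was to where it is. A minimal plain-Java sketch of the position update (the names are mine, and the Processing drawing calls are left out):

```java
import java.util.Random;

// A particle doing a 2D random walk (discrete brownian motion):
// each step adds a small gaussian nudge to the position. Drawing
// a line from the previous position to the new one every frame
// produces the wandering trails.
public class BrownianParticle {
    private final Random rng;
    float x, y;          // current position
    float prevX, prevY;  // previous position, for the trail segment

    public BrownianParticle(float x, float y, long seed) {
        this.x = x;
        this.y = y;
        this.rng = new Random(seed);
    }

    // Advance one frame; `scale` controls how jittery the walk is.
    public void step(float scale) {
        prevX = x;
        prevY = y;
        x += (float) rng.nextGaussian() * scale;
        y += (float) rng.nextGaussian() * scale;
    }
}
```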
After watching the particles swimming around for a bit, I remembered that a while ago I'd been thinking about simulating the 'human' viewport movement seen in a thousand and one mobile phone videos. Seeing as I had the particles already, they seemed like a good candidate for the viewport treatment. The result so far is below. I'm pretty happy with it; I'd really like to try adding some kind of simulated blur and making it react to sound, and then I'll probably post the code.
I love Processing in a way that's almost unseemly for a gentleman and a programming language, but there's something about Java that feels, how can I put this, a bit inelegant and crufty. Lots of braces, lots of type declarations and so on. That feeling put me on a search for a way to do cross-platform graphics in a tidier language. The options I've come up with so far are:
scala -cp .:[path_to_processing]/lib/core.jar Test
Hopefully you should see a window with a black background containing a white rectangle.
I have no idea whether I'm going to like Scala or not (to be honest Python is looking more promising at the moment) but it'll be interesting to see whether it's an improvement on straight Java for the kind of code I'm writing.
MidiOSC, the MIDI to OSC bridge that I've been beavering away on for the past couple of months, is now available for download on GitHub. I wrote it in response to the messed up state of MIDI support in Java on OSX. I do a lot of work with Processing and external hardware that talks MIDI so having things break at random when the OS was upgraded wasn't a good thing for me. Now I can just fire up MidiOSC and spit data in a modern format into any language with an OSC library (C, Java, Ruby, Python, Scala...). My sequencer for the Novation Launchpad and, likely, a lot of things I do in the future are only going to talk OSC so I thought I'd better make it available sooner rather than later. More on the sequencer soon...
I was digging through some of my old Processing sketches to find something to generate the background for this page when I stumbled across an experiment in particle systems that I'd been working on. The sketch generates a crude approximation of a gravitational field and then drops a load of particles onto it that change velocity according to their location. As you can see in the video, this can lead to some nicely chaotic behaviour.
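The core of the velocity update is just a force summed over a few fixed points. A minimal plain-Java sketch of one way to do it (hypothetical names, and since the real sketch's field is only a crude approximation of gravity, treat the inverse-square force law here as an assumption):

```java
// One step of a particle moving in a field defined by fixed
// "hills" that push particles away (repulsive, rather than
// attractive wells).
public class FieldParticle {
    float x, y;    // position
    float vx, vy;  // velocity

    public FieldParticle(float x, float y) {
        this.x = x;
        this.y = y;
    }

    // hills: {x, y} coordinates of each gravity hill.
    // strength: how hard each hill pushes; dt: timestep.
    public void step(float[][] hills, float strength, float dt) {
        for (float[] hill : hills) {
            float dx = x - hill[0];
            float dy = y - hill[1];
            float distSq = dx * dx + dy * dy + 1e-6f; // avoid divide-by-zero
            float dist = (float) Math.sqrt(distSq);
            // Repulsive inverse-square force, directed away from the hill.
            float f = strength / distSq;
            vx += f * (dx / dist) * dt;
            vy += f * (dy / dist) * dt;
        }
        x += vx * dt;
        y += vy * dt;
    }
}
```

With a handful of hills and a few dozen particles, the accumulated velocities quickly produce the kind of chaotic paths visible in the video.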
If you're anything like me, when you've got a source of chaotic data your first reaction is to hook it up to a synthesiser, so I did exactly that. The sketch sends out an OSC message for every particle once per frame, containing a note number for the particle and the distance from that particle to each of the gravity hills (hills produced more interesting behaviour than wells, as it turned out). SuperCollider picks up the messages and generates four sound grains with different timbres (sine, pulse and saw), each with an amplitude that increases with the distance from the particle to the associated hill.
The results have a lot of movement to them which is what I was going for. You could of course do all sorts of things with the data stream, the sounds I chose were quick to code rather than especially pleasing.
Alright then... My New Year's resolution was to share/shamelessly self-promote/shout into the void more often, so with a view to doing that I've completely revamped my corner of the internet. The focus initially will be what I'm working on in SuperCollider and Processing, but who knows where things will go. Wish me luck. And now on with the show...