Project outline

Not sure how we should break down the workflow, so let's figure it out.

Idea: Start from aggregate motion, iterate for multiple streams

I think we could use the Kinect or Leap Motion (both supported in Processing and openFrameworks) for aggregate motion detection during a prototyping phase, so we can focus on sound generation first. That may be easier than trying to collate many, many individual data streams into a coherent composition right away.
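As a rough starting point, here is a minimal Processing sketch that stands in for the Kinect/Leap Motion step by frame-differencing a plain webcam feed; the single `motionLevel` number it produces per frame is the kind of aggregate value a sound-generation layer could consume. The webcam stand-in and the variable names are just prototyping assumptions, not a final design.

```processing
// Minimal aggregate-motion prototype: frame differencing on a webcam feed.
// A Kinect/Leap Motion library would replace Capture here; the output is a
// single "how much is everyone moving" number per frame.
import processing.video.*;

Capture cam;
int[] prevFrame;        // brightness of the previous frame
float motionLevel = 0;  // aggregate motion estimate

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  prevFrame = new int[width * height];
}

void draw() {
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
    long totalDiff = 0;
    int n = min(cam.pixels.length, prevFrame.length);
    for (int i = 0; i < n; i++) {
      int b = (int) brightness(cam.pixels[i]);
      totalDiff += abs(b - prevFrame[i]);
      prevFrame[i] = b;
    }
    // Mean per-pixel brightness change = crude aggregate motion estimate.
    motionLevel = (float) totalDiff / n;
  }
  image(cam, 0, 0);
  fill(255);
  text("aggregate motion: " + nf(motionLevel, 1, 2), 10, 20);
}
```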

We may also find that it is ultimately more pleasant to generate some baseline music from the aggregate motion and then use individual motion (say, outliers) to trigger specific sounds.
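If we go that route, the split could look something like the sketch below: the group mean drives the baseline layer, and anyone whose motion sits well above the group (here an assumed mean + 2 standard deviations threshold) fires an individual trigger. The per-person values, the threshold, and the `analyse` helper are all placeholders to illustrate the idea.

```processing
// Hypothetical sketch: the group average drives a baseline music layer,
// while anyone moving far above the group fires a one-off trigger.
// The motion values here are made-up placeholders.
void setup() {
  float[] perPersonMotion = { 0.2, 0.3, 0.25, 1.8, 0.22 };
  analyse(perPersonMotion);
}

void analyse(float[] motion) {
  float mean = 0;
  for (float m : motion) mean += m;
  mean /= motion.length;

  float variance = 0;
  for (float m : motion) variance += (m - mean) * (m - mean);
  float stddev = sqrt(variance / motion.length);

  println("baseline level: " + mean);  // drives the generated baseline

  for (int i = 0; i < motion.length; i++) {
    if (motion[i] > mean + 2 * stddev) {
      println("trigger one-off sound for person " + i);  // outlier event
    }
  }
}
```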

Idea: Generated music through genetic algorithms

My basic first idea (and we'll see how viable it is) is to create music with a program evolved through genetic programming. The movement of the audience scores the sound they hear, and that score determines the fitness of the program that generated it. This way the music that is played is slowly shaped by an evolutionary process: we evolve dance music.
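As a first, simpler cut at that loop (a plain genetic algorithm over fixed-length note sequences rather than full genetic programming), something like the following Processing sketch shows the shape of it: each genome is a short melody, and its fitness is the crowd motion measured while it played. The `crowdMotionScore` stub, population size, mutation rate, and pitch range are all made-up placeholders.

```processing
// Illustrative genetic-algorithm loop: livelier dancing = fitter music.
int POP = 16, LEN = 8;
int[][] population = new int[POP][LEN];
float[] fitness = new float[POP];

void setup() {
  // Random initial melodies (MIDI-ish pitches 48..72).
  for (int i = 0; i < POP; i++)
    for (int j = 0; j < LEN; j++)
      population[i][j] = 48 + int(random(25));
}

void draw() {
  // 1. "Play" each genome and score it by how much people moved.
  for (int i = 0; i < POP; i++) fitness[i] = crowdMotionScore(population[i]);

  // 2. Breed the next generation: tournament selection + crossover + mutation.
  int[][] next = new int[POP][LEN];
  for (int i = 0; i < POP; i++) {
    int[] a = population[tournament()], b = population[tournament()];
    int cut = int(random(LEN));
    for (int j = 0; j < LEN; j++) {
      next[i][j] = (j < cut) ? a[j] : b[j];
      if (random(1) < 0.05) next[i][j] = 48 + int(random(25));  // mutation
    }
  }
  population = next;
}

// Pick the fitter of two randomly chosen genomes.
int tournament() {
  int x = int(random(POP)), y = int(random(POP));
  return fitness[x] > fitness[y] ? x : y;
}

// Placeholder: in the real installation this would be the aggregate motion
// recorded while this particular sequence was played back.
float crowdMotionScore(int[] genome) {
  return random(1);
}
```

One thing to keep in mind with this approach: each genome has to be played long enough to get a meaningful motion reading, which limits how many generations we can realistically run in one session.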