[Lesson 12] Music and AI
Extracts from lesson 12 chat:
Student: I was wondering do you know of any generative ai/machine learning programs that can create potential sequencer patterns from listening to a song or something similar?
Rebecca Fiebrink:
I don't know of programs that will do this out of the box. The main reason is probably that it requires 2 separate steps -- a sequencer pattern generator would generally be working with a symbolic music representation (i.e., a sequence of note/sample choices over time). But if you want to learn possible sequences from a song, then you've got to start working with the raw audio representation. So you have to bridge this gap from audio to symbolic somehow.
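To make that gap concrete, here's a small hypothetical sketch in Python (not from the lesson): a symbolic sequencer pattern is just a handful of step events, while the raw audio of the same phrase is tens of thousands of samples with no explicit note boundaries.

```python
import numpy as np

# Symbolic representation: a 4-step sequencer pattern as (MIDI note, step) pairs.
# (Hypothetical example data, not derived from any particular song.)
pattern = [(60, 0), (63, 1), (67, 2), (72, 3)]  # C4, Eb4, G4, C5 on steps 0-3

# Raw audio representation of the same idea: one second per step at 44.1 kHz
# is already ~176,000 floating-point samples.
sr = 44100
t = np.arange(sr) / sr
audio = np.concatenate([
    np.sin(2 * np.pi * 440.0 * 2 ** ((note - 69) / 12) * t)  # one sine tone per step
    for note, _ in pattern
])
print(len(pattern), "symbolic events vs", audio.shape[0], "audio samples")
```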
In the very simplest case, you could use a music transcription algorithm to try to derive a symbolic representation (usually MIDI or something similar) from the audio of a recorded song. This could work pretty well in some circumstances (e.g., common combinations of acoustic instruments, or monophonic tracks that just contain a melody or drum part that you want to reproduce in your sequencer). But it can get pretty complicated if you want to throw arbitrary audio into it and then get a representation suitable for giving to a music generation algorithm.
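As a rough illustration of the monophonic case, here is a minimal sketch using librosa's pitch tracking and onset detection. It assumes a clean, monophonic melody recording; `melody.wav` is a placeholder filename, and this is a crude heuristic rather than a robust transcription system.

```python
import numpy as np
import librosa

# Load a (hypothetical) monophonic melody recording.
y, sr = librosa.load("melody.wav", sr=None, mono=True)

# Frame-level fundamental-frequency estimate with pYIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
times = librosa.times_like(f0, sr=sr)

# Note onsets give rough segment boundaries for the sequencer steps.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
boundaries = np.concatenate([onsets, [times[-1]]])

# For each inter-onset segment, take the median voiced pitch and round to a MIDI note.
notes = []
for start, end in zip(boundaries[:-1], boundaries[1:]):
    mask = (times >= start) & (times < end) & voiced_flag
    if mask.any():
        midi = int(np.round(np.median(librosa.hz_to_midi(f0[mask]))))
        notes.append((midi, float(start)))

print(notes)  # e.g. [(60, 0.02), (62, 0.51), ...] -- a crude symbolic pattern
```

Something like this list of (note, onset time) pairs could then be quantized to a step grid and fed to a sequencer or a symbolic music generation model; polyphonic or noisy material would need a much more capable transcription approach.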