AI As An Effect
An album of music created with AI is no more interesting than an album that uses a wah pedal on every track, or the Simmons electronic drum kit circa 1984.
Another application for AI is musical paraphrasing based on isomelodies and isorhythms ("iso" means "same," so "same melody" or "same rhythm"). This is something AI could do very well: use datasets of pitch series or pitch classes and map them over datasets of rhythms, grooves, or beats.
An example where a sequence of six pitches is mapped across various rhythmic figures (marked with slurs):
What it sounds like when a clarinet, bassoon, and double bass play the six notes as whole notes:
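As a rough sketch of the mapping idea, here is how a fixed pitch series could be cycled across a set of rhythmic figures. The pitch names, the rhythms, and the `isomelody` helper are all hypothetical illustrations, not output from any particular AI system:

```python
# A minimal sketch of the isomelody idea: one fixed pitch series mapped
# across several rhythmic figures. Pitches and rhythms are made up for
# illustration; durations are in quarter-note units.

from itertools import cycle

# A six-note pitch series (hypothetical example pitches)
pitch_series = ["D4", "F4", "G4", "Bb4", "C5", "Eb5"]

# Candidate rhythmic figures, each a list of durations in quarter notes
rhythms = [
    [1, 1, 0.5, 0.5, 1, 2],      # mixed values
    [0.5] * 6,                   # even eighths
    [1.5, 0.5, 1.5, 0.5, 1, 1],  # dotted figure
]

def isomelody(pitches, rhythm):
    """Pair the pitch series with a rhythm, cycling the pitches if the
    rhythm is longer than the series (the classic isorhythmic trick)."""
    pitch_cycle = cycle(pitches)
    return [(next(pitch_cycle), duration) for duration in rhythm]

for rhythm in rhythms:
    print(isomelody(pitch_series, rhythm))
```

The same six pitches come out with a different character under each rhythm, which is the paraphrasing effect described above; an AI's role would be choosing which pitch and rhythm datasets to pair.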
Here is a diagram for how AI could be used as a real-time effect in music, primarily in jazz or in music that is less through-composed. Algorithms run in real time in the background, and the computerist (operator/instrumentalist) chooses a “snapshot” when an interesting pattern arises (as with the hexagonal interface on the Simmons). The main problem here is human reaction time: making good choices for the band to improvise against may be too slow, which could make for an improvised train wreck. This is not to say that this doesn’t happen in improvised music already, so until one could run a musical Turing test, we won’t know whether AI in music has any interesting applications beyond another stomp box or odd drum machine.
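A minimal sketch of that snapshot loop, assuming the generator runs in a background thread; `generate_pattern` here is a random stand-in for whatever algorithm would actually run underneath:

```python
# A sketch of "AI as a real-time effect": a background process keeps
# producing candidate patterns, and the computerist freezes a snapshot
# when something interesting appears.

import random
import threading
import time

latest_pattern = None  # most recent pattern from the generator
lock = threading.Lock()
running = True

def generate_pattern():
    """Stand-in for the real algorithm: a random 8-step rhythm mask."""
    return [random.choice([0, 1]) for _ in range(8)]

def generator_loop():
    global latest_pattern
    while running:
        pattern = generate_pattern()
        with lock:
            latest_pattern = pattern
        time.sleep(0.1)  # a new candidate every 100 ms

def snapshot():
    """The operator's 'freeze' gesture: grab whatever is current."""
    with lock:
        return list(latest_pattern) if latest_pattern else []

thread = threading.Thread(target=generator_loop, daemon=True)
thread.start()
time.sleep(0.5)  # patterns stream by while the operator listens
print("captured:", snapshot())
running = False
```

Note that the snapshot grabs whatever is current at the moment of the gesture, so a slow reaction captures a different pattern than the one the operator heard; that gap is exactly the train-wreck risk described above.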
Deepfakes are both creepy and very interesting, but I realize that they are the visual counterpart of instruments that sound like other instruments by playing samples (which incidentally began to appear in the early 1980s as well). The Synclavier was one of the first, and it gave the artist a means of blending or interpolating timbre in real time. Deepfakes are essentially the same thing but done with faces and voices.
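As a minimal sketch of that kind of timbre interpolation (not a reconstruction of the Synclavier's actual method), two synthetic tones at the same pitch can be morphed with a simple weighted crossfade:

```python
# Timbre interpolation by crossfading two tones. Both "instruments"
# here are crude synthetic stand-ins for sampled sounds.

import numpy as np

RATE = 44100
t = np.linspace(0, 1.0, RATE, endpoint=False)

# Two timbres at the same pitch: a pure sine and a brighter tone
tone_a = np.sin(2 * np.pi * 220 * t)
tone_b = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in (1, 2, 3, 4))

# Interpolation weight sweeps 0 -> 1 over the note, morphing a into b
w = np.linspace(0.0, 1.0, RATE)
morph = (1 - w) * tone_a + w * tone_b

# 'morph' can be written to a WAV file or fed to an audio callback
print(morph.shape, morph.dtype)
```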
Deepfakes in music could be the next big thing, but it's been-there-done-that in some ways. Art shouldn't be about cool effects. Or is it?