Emily 2.0: Possible applications for AI in music

The Singularity is already here, as some believe, or it is otherwise insinuating itself into our lives in sneaky ways.

Can you imagine music created with AI sounding like, or having the visceral impact of, something like this?



Making music has become more and more electronic, or at least reliant on electronics to produce it. In this sense, the Singularity in music may have happened long ago. The use of AI in music composition has been around since the mid-20th century, so it is hardly new and hardly cutting-edge. However interesting it might be to imagine AI informing our musical creativity, the sense of 'soul' cannot be artificial or contrived.

To our very smart ears, music made with AI is not difficult to spot when its intent is to be a proxy for the real thing. Its creepiness can almost be compelling when you sense that it is trying to respond to the Muse, even if it is a lifeless algorithm. Somehow, one can sense a humanness in it that our mirror neurons immediately respond to.

When I first heard Emily play this, I thought it was interesting, and when I reverse-engineered it, I understood the set of rules that made it.



Music made from algorithms is an interesting way to roll the dice for an idea, but usually doesn’t have a long shelf life as true musical expression or as memorable compositions. Guitarists on YouTube showing off their skill with arpeggios are decidedly more interesting than Emily running them off an algorithm.
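As a toy illustration of rolling the dice for an idea, here is a minimal sketch that draws notes at random from a scale to propose a short arpeggio. The scale, durations, and seed are arbitrary choices made for this example, not taken from Emily or any particular system.

```python
# A minimal "roll the dice" idea generator: pick notes at random from a scale
# and string them into a short arpeggio. Everything here is an arbitrary,
# illustrative choice rather than a real composition tool.
import random

A_MINOR = ["A", "B", "C", "D", "E", "F", "G"]

def roll_arpeggio(scale, length=8, seed=None):
    """Return `length` (note, duration) pairs drawn at random from `scale`."""
    rng = random.Random(seed)
    durations = [0.25, 0.5]  # eighth and quarter notes, chosen arbitrarily
    return [(rng.choice(scale), rng.choice(durations)) for _ in range(length)]

print(roll_arpeggio(A_MINOR, seed=42))
```

It will dutifully produce an idea every time, which is exactly the point: the dice always roll, but the result rarely has a long shelf life on its own.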

It is interesting to reprocess algorithmically conceived music at the human level and perform it on real instruments to see how it works in a different medium. This was done in the past with Christopher O'Riley's solo piano interpretations of Radiohead songs and Bang on a Can's orchestrations of Brian Eno's Music For Airports and Burning Airlines Give You So Much More, which took on a new life as 'serious' composition; the latter, sadly, bears no resemblance to the original in style or mood. (Perhaps AI could be used to create a wireframe version that composers then expand upon.)







It's the interpretation and translation between the more 'pixelated' information contained within a Radiohead recording and the analog version pounded out on an acoustic instrument. If AI can do this, AI in music might have a bright future, but algorithms aren't usually that surprising in terms of the variety they can produce.

Personally, I would prefer a feedback-loop system where electronic and analog versions get mashed up and output to both digital (robot) and analog (human) iterations. That would keep both humans and machines learning from each other (like reCAPTCHA for audio). The machines would then learn and process through listening, letting humans take the controls.
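A rough sketch of that feedback loop, under the assumption that each machine and human pass can be modeled as a function that takes a musical sketch and returns a revised one. The names feedback_loop, machine_pass, and human_pass are hypothetical placeholders for this illustration, not a real API.

```python
# A minimal sketch of the human/machine feedback loop described above.
# Each side reworks the other's latest version, and every iteration is kept
# so both sides have material to learn from.
def feedback_loop(seed_idea, machine_pass, human_pass, rounds=3):
    """Alternate machine and human revisions, keeping every iteration."""
    digital, analog = seed_idea, seed_idea
    history = []
    for _ in range(rounds):
        digital = machine_pass(analog)   # the machine reworks the latest human version
        analog = human_pass(digital)     # the human reworks the latest machine version
        history.append((digital, analog))
    return history

# Toy usage with stand-in passes that just annotate a string:
print(feedback_loop("riff", lambda x: x + "+robot", lambda x: x + "+human", rounds=2))
```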

Possible applications of AI in music:

  • Smart or 'predictive' effects that learn your voice, songwriting or compositional styles and apply effects accordingly. Existing templates can be applied to shape the contour of the music based on artist or composer style;
  • Smart effects that suggest harmony and counterpoint (a minimal sketch of this idea appears after this list);
  • Applications that compose for films based on a gigantic database of cinematic moods;
  • The musical equivalent of grammar and spell checkers to suggest changes to chord harmonizations, tempos, and so on. (Auto-tune for the entire song or composition);
  • 'Predictive coding' for music that uses larger chunks of audio information to form entire compositions based on constellations of small samples;
  • The AI equivalent of Sample and Hold, where random musical segments are looped.
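As a concrete illustration of the harmony-suggestion and 'grammar checker' ideas above, here is a minimal sketch of a first-order Markov chain that learns which chord tends to follow which from a small corpus of progressions and then suggests a likely next chord. The corpus and chord labels are made up for illustration; a real tool would learn from a composer's own material.

```python
# A toy "chord grammar checker": count which chord follows which in a small
# corpus, then suggest the most common continuation for a given chord.
from collections import Counter, defaultdict

# Toy corpus of chord progressions (made up for illustration).
corpus = [
    ["C", "Am", "F", "G", "C"],
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G", "Am"],
]

# Count first-order transitions: which chord follows which.
transitions = defaultdict(Counter)
for progression in corpus:
    for current, nxt in zip(progression, progression[1:]):
        transitions[current][nxt] += 1

def suggest_next(chord):
    """Suggest the chord that most often follows `chord` in the corpus."""
    if chord not in transitions:
        return None
    return transitions[chord].most_common(1)[0][0]

print(suggest_next("F"))  # prints "G" for this toy corpus
```

The same counting trick, scaled up and applied to a particular artist's catalog, is roughly what the 'predictive' and 'style-aware' effects above would need under the hood.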
