Most people would probably be surprised to learn that the first attempt to create music with artificial intelligence dates back almost 60 years, to 1958.
Right from the earliest days of computers, people have been trying to make music with them. The problem of creating a hit song with artificial intelligence is as old as computers themselves.
In this article, we're going to take you through the key inflection points in artificial intelligence music, from 1958 to today.
It's based on a talk given at the Music Tech Summit by Stephen Phillips, a founder of Popgun, one of the industry leaders in training AI to create music, and by Bob Moczydlowsky, Managing Director of Techstars Music, an accelerator program based in Los Angeles backed by Sony Music, Warner Music and Sonos.
You can watch the full video here...
1958 — The first artificial intelligence music attempt
The first artificial intelligence music dates back to 1958. It was created by Iannis Xenakis and is generally regarded as the first algorithmic piece ever written.
Xenakis used something called Markov chains to create it, a now very old technique for sequence prediction, and of course sequences are a big part of music.
Essentially, given the note just played, the model predicts which note is most likely to come next.
To make these predictions, Markov chains need a whole lot of data. So Xenakis took a huge number of tracks and passed them through Markov chains, and this is the result…
Let’s face it, it’s pretty shit. It doesn’t really sound like music, but it kinda does?
What's missing is localised structure: there are no repeating patterns, nothing remotely melodic.
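To make the mechanics concrete, here's a minimal sketch of the idea in plain Python (an illustration of Markov chains in general, not Xenakis's actual system): count how often each note follows another in the training data, then sample from those transition probabilities to generate a new sequence.

```python
import random
from collections import Counter, defaultdict

# Toy training data: melodies as sequences of note names.
training_melodies = [
    ["C", "D", "E", "C", "G", "E", "D", "C"],
    ["E", "D", "C", "D", "E", "E", "E"],
]

# Count how often each note follows each other note.
transitions = defaultdict(Counter)
for melody in training_melodies:
    for current, following in zip(melody, melody[1:]):
        transitions[current][following] += 1

def generate(start, length=8):
    """Walk the chain: at each step, pick the next note in
    proportion to how often it followed the current one."""
    notes = [start]
    for _ in range(length - 1):
        counts = transitions[notes[-1]]
        if not counts:
            break
        notes.append(random.choices(list(counts),
                                    weights=list(counts.values()))[0])
    return notes

print(generate("C"))  # e.g. ['C', 'D', 'E', 'E', 'E', 'D', 'C', 'G']
```

Each step only looks at the single previous note, which is exactly why the output wanders: the chain has no memory of anything further back.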
1989 — The first neural network music created by artificial intelligence
More than 30 years later, in 1989, the first neural network was used to create music. Even then, Markov chains were considered old-school machine learning.
Neural networks are a form of artificial intelligence. Like Markov chains, they learn from being fed data; however, the way they work is loosely modeled on the human brain.
The creator of the track below, Michael C. Mozer, trained his neural network on Bach by feeding it lots of different Bach pieces. Based on that learning, this track is what the AI came up with.
The big breakthrough here was that there was actual structure to the music. It doesn't sound that great, but it at least resembles music.
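For a feel of what "learning from data" means here, below is a minimal sketch of next-note prediction as a neural network, written in PyTorch (our choice for illustration; Mozer's actual system was a custom recurrent network, and the training pairs here are made up). Instead of a lookup table of counts, a small network learns the note-to-note mapping.

```python
import torch
import torch.nn as nn

# Toy next-note predictor (not Mozer's architecture). Notes are
# encoded as integer pitch classes: 0 = C, 2 = D, 4 = E, 7 = G, etc.
vocab = 12
model = nn.Sequential(
    nn.Embedding(vocab, 16),  # learn a vector for each note
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, vocab),     # scores for each possible next note
)

# Hypothetical (current note, next note) pairs taken from melodies.
inputs = torch.tensor([0, 2, 4, 0, 7, 4, 2])
targets = torch.tensor([2, 4, 0, 7, 4, 2, 0])

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

# After training, ask for the most likely note to follow C.
print(model(torch.tensor([0])).argmax(dim=-1).item())
```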
1994 — Accepted music rules are built into Markov Chains
Markov chains were revisited in 1994 by the composer David Cope. Music has accepted rules, for example that you should play in key, along with the conventions of musical notation. Cope built these rules into his Markov chains, so the result sounds a lot better than it did before.
It’s clear right away that this track is a whole lot better, and actually does sound like music.
This is where the battle between supervised and unsupervised learning began. In supervised learning you give the AI a set of rules to follow (as in this case), whereas in unsupervised learning there are no set rules and the learning is based purely on the data itself. Check out a great video here that explains the difference.
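As a toy illustration of building a rule into a Markov chain (our own sketch, not Cope's system), you can simply filter the chain's candidate next notes so anything outside the key is discarded before sampling:

```python
import random

C_MAJOR = {"C", "D", "E", "F", "G", "A", "B"}  # the "rule": stay in key

def next_note(counts, allowed=C_MAJOR):
    """Sample the next note from Markov transition counts,
    but discard any candidate that breaks the key rule."""
    legal = {note: n for note, n in counts.items() if note in allowed}
    if not legal:
        return None
    return random.choices(list(legal), weights=list(legal.values()))[0]

# The chain may suggest C# most often, but the rule filters it out:
print(next_note({"C#": 5, "D": 3, "G": 2}))  # only ever D or G
```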
2002 — LSTM neural networks are used to create music
Doug Eck was one of the first to use LSTMs, a form of neural network built around the concepts of long-term and short-term memory, to create music.
This was a breakthrough because, for the first time, there were longer repeating patterns. Nobody had been able to get AI-generated music to repeat like this before.
Doug Eck later went on to head the Magenta project at Google Brain, and is now considered one of the leaders in the music artificial intelligence field.
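Here's a minimal sketch of a next-note LSTM in PyTorch (an illustration of the technique, not Eck's original code). The difference from the earlier predictors is the hidden state: it carries context across the whole sequence, which is what makes longer repeating patterns learnable.

```python
import torch
import torch.nn as nn

class NoteLSTM(nn.Module):
    """Toy LSTM next-note model: unlike a one-step predictor,
    the recurrent state remembers what came earlier in the tune."""
    def __init__(self, vocab=12, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, 16)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, notes, state=None):
        x = self.embed(notes)             # (batch, time, 16)
        out, state = self.lstm(x, state)  # hidden state = the "memory"
        return self.head(out), state      # next-note scores at each step

model = NoteLSTM()
melody = torch.tensor([[0, 4, 7, 0, 4, 7, 0]])  # a repeating motif
scores, _ = model(melody)
print(scores.shape)  # (1, 7, 12): a prediction after every note
```

Trained on real melodies (with the same cross-entropy loop as above), the hidden state is what would let the model learn that a motif tends to come back.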
2016 — The neural network era begins
You might have already heard this next song, “Daddy's Car”. It was created by the CSL team at Sony, a research and development team of 30 people across the world, largely PhDs, that takes no direction from the rest of the company.
They trained a number of LSTM networks and a bunch of rules-based AIs on different pieces of the song, took those compositional outputs, including lyrics, and combined them with human musicians into this track, with the idea of making a Beatles song.
It's important to note here that the song was composed by AI but performed by humans. This is not an AI fully synthesising a finished, mastered track.
This gets into a really interesting discussion: where do we draw the line? The AI wrote it; is that enough to be considered an ‘AI song’? Or does an AI have to sing the lyrics and play every instrument for it to count as the first AI song?
Regardless, it's pretty clear that this track is an order of magnitude better than the ones before it. This was a major turning point, and we saw a land rush of other companies starting to release their own AI music.
2017 — Enter ALICE
Popgun, the company Stephen Phillips co-founded, has built an AI called Alice. Alice is totally ‘unsupervised’: it's a causal-convolutional net that takes small fragments of audio and turns them into images.
The training files have no metadata descriptions or MIDI data attached. Alice learns by listening to the training files, then listens to a human performer and contributes its own ideas.
This AI is creating new music the same way a human would. Every Monday, Alice practices with jazz pianist Sean Foran, who reports back to Popgun on the AI's progress.
As Sean plays, Alice can understand what he's playing and play along with him -- so this AI is not just composing music and saving it, but actually responding to musical ideas in real time.
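Popgun hasn't published Alice's architecture in detail, but the name ‘causal-convolutional’ points at a well-known building block. The sketch below shows a causal 1-D convolution in PyTorch: each output sample is computed from current and past audio only, never the future, which is exactly the property that makes responding to a live performer possible.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """A generic causal 1-D convolution (Alice's real architecture is
    not public). Left-padding the input means every output sample
    depends only on current and past samples, never future ones."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # look back, not ahead
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))  # pad the past side only
        return self.conv(x)

audio = torch.randn(1, 1, 16000)  # one second of mono audio at 16 kHz
layer = CausalConv1d(channels=1, kernel_size=3, dilation=2)
print(layer(audio).shape)  # (1, 1, 16000): same length, still causal
```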
As any instrumentalist knows, the best fun to be had with music is in playing with other people. What this technology promises is that you won't have to play on your own anymore -- even if there is no one else around.
Not only will this be a great creative tool for professional musicians, it also has the potential to be used for education. The fastest way for musicians to improve is to play with others -- but in 2017, these ‘others’ don't necessarily have to be human.
This is thanks to 60 years of human curiosity and innovation, and of course the ever-increasing demands on the music industry. Soon, musicians and songwriters will be able to jam at any time with musical AI, and we may just be on the verge of uncovering something totally new.
If you're curious about how our technology is changing the game for music festival promoters, request a product demo of our software platform here.