Artists Use AI to Compose Music

What does music composed by a robot sound like?

Digital music.

Where do musicians get their inspiration when they compose? Many draw from real life experiences. Others may get ideas from a muse. Today, some artists are turning to robots!

Holly Herndon is an American composer, musician, and sound artist now based in Berlin. She completed her PhD in composition at Stanford’s Center for Computer Research in Music and Acoustics. In 2019, she released her own studio album, PROTO.

While most musicians collaborate with other artists when creating their music, Holly took a different approach, working with a DIY machine learning software called Spawn. The software uses artificial neural networks (ANNs) modeled after the structure of the human brain. During training, these networks learn patterns from datasets, and from that data they create new material that incorporates Herndon’s own voice.

When producing the album, Holly trained neural networks on datasets to generate new music. The process requires input data of music written by people (or by artificial intelligence); the networks then produce variations of that music.
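Spawn itself is bespoke neural-network software that hasn’t been publicly released, but the basic loop described above (learn the patterns in a musical corpus, then emit new material in the same style) can be illustrated with a much simpler stand-in: a first-order Markov chain over note names. Everything here, from the toy melody to the function names, is an illustrative sketch, not Spawn’s actual code.

```python
import random

def train_markov(notes):
    """Build a first-order Markov model: map each note to the list
    of notes that follow it in the training sequence."""
    model = {}
    for cur, nxt in zip(notes, notes[1:]):
        model.setdefault(cur, []).append(nxt)
    return model

def generate(model, start, length, rng=random):
    """Walk the model to produce a new sequence that mimics the
    training data's note-to-note patterns."""
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: jump to a random known note
            choices = list(model)
        out.append(rng.choice(choices))
    return out

# A toy "dataset": a melody written out as note names
melody = ["C", "E", "G", "E", "C", "E", "A", "G", "E", "C"]
model = train_markov(melody)
variation = generate(model, "C", 8)
print(variation)  # a new 8-note phrase built from the melody's patterns
```

A neural network like the ones behind Spawn learns far richer patterns (timbre, phrasing, even a specific voice), but the principle is the same: the output is always a recombination of statistics absorbed from the training data.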

“Computers surprise you in a way that an acoustic instrument doesn’t,” Holly said in an interview with MusicRadar. So what does music sound like when composed by what is essentially a robot? Truncated, choppy, digitized; it sounds like music from the future, that’s for sure!

But Holly Herndon isn’t the only one exploring AI in composition. Machines have played an increasingly important role in music over the last century. The godfather of computer science, Alan Turing, developed the first computer-generated music in 1951, according to The Guardian.

Then in 1980, David Cope of the University of California, Santa Cruz developed EMI (Experiments in Musical Intelligence), a system that analyzes existing music and produces new pieces in a similar style.

AI might not take over the job of “pop star” anytime soon, or will it? The digital avatar Miquela Sousa, a.k.a. Lil Miquela, is a computer-generated artist with over 1 million followers on Instagram. “I’m a 19-year-old model and singer. And I’m a robot,” Miquela said in an interview with CR Fashion Book. This raises the question: Can we reproduce creativity using a computer?

Canadian musician Grimes (whom you may know as the girlfriend of SpaceX CEO Elon Musk, with whom she has a child named X Æ A-Xii) made a bold prediction on Sean Carroll’s Mindscape podcast: “I feel like we’re in the end of art, human art. Once there’s actually AGI (Artificial General Intelligence), they’re gonna be so much better at making art than us.”

So perhaps the next frontier of music lies somewhere in between. Many see the path forward as a new dawn of creativity that combines human ingenuity with AI. One thing is for sure: the next chapter of music will take unexpected directions as music and artificial intelligence become ever more intertwined.
