Earlier this year, a curious event took place.
At the famed Glastonbury music festival, concertgoers were treated to an unexpected duet between Sir Paul McCartney and John Lennon on the track 'I've Got A Feeling', part of a stunning three-hour set that also honored the legacy of the Beatles. But hold on, you might be thinking. Wasn't that the same legendary John Lennon who was so shockingly assassinated in 1980? How could he possibly be taking to the stage of an English festival 42 years on?
Well, we have artificial intelligence and Peter Jackson (yes, the same director behind the Lord of the Rings trilogy) to thank for this beyond-the-grave apparition.
Jackson has lifted the veil on the technology used to put the performance together, explaining that his team developed a machine learning system that was taught what a guitar, a bass, and a singing voice sound like. More specifically, this custom-made AI was trained to sing like Sir Paul McCartney and John Lennon, making it possible to recreate virtual presences that are as realistic as possible.
AI music is bringing other legendary singers back to life.
As part of the “Lost Tapes of the 27 Club”, an initiative led by Canada-based mental health charity Over the Bridge, a collective of performers who died at the age of 27 “released” new tracks made entirely with Google’s AI program Magenta.
Amy Winehouse, Kurt Cobain, and Jimi Hendrix are some of the artists covered by the project, with the lyrics and recorded music being entirely authored by AI.
How Is AI Music Created?
The process of creating this new type of AI music is seemingly straightforward: users feed a singer’s existing music into a bot that relies on machine learning to detect patterns and produce new music based on the pre-existing catalog.
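As a rough illustration of that pattern-detection step, the sketch below trains a first-order Markov chain on a toy “back catalog” of melodies and samples a new one. Everything here is a hypothetical stand-in: real systems train deep neural networks on enormous audio corpora, not note lists, but the learn-patterns-then-generate loop is the same idea.

```python
import random
from collections import defaultdict

def train(note_sequences):
    """Build a first-order Markov model: for each note, which notes tend to follow it."""
    transitions = defaultdict(list)
    for seq in note_sequences:
        for current, nxt in zip(seq, seq[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody by repeatedly picking a plausible next note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        candidates = transitions.get(melody[-1])
        if not candidates:  # dead end: fall back to any learned note
            candidates = list(transitions)
        melody.append(rng.choice(candidates))
    return melody

# A toy "catalog" of melodies (note names), standing in for a real artist's work.
catalog = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "A", "G", "E"],
    ["E", "G", "A", "G", "E", "C"],
]
model = train(catalog)
new_melody = generate(model, start="C", length=8)
```

The output is a melody the catalog never contained, yet every note-to-note step was observed somewhere in the training data, which is the essence of generating “new” music from a pre-existing catalog.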
The same technology was used to create three lines of voiceover spoken by the late celebrity chef Anthony Bourdain for Roadrunner: A Film About Anthony Bourdain, a documentary directed by Morgan Neville.
Want to listen to a podcast interview between Joe Rogan and Steve Jobs? That’s possible, too. And it doesn’t matter that they’ve never met or that the Apple founder has been dead for over a decade.
Elsewhere, artificial intelligence is also powering audio deepfakes, also known as voice cloning or synthetic voicing, whereby AI models are fed training data. Typically, this data includes original recordings and voice samples of a target person speaking or singing. Based on this data set, AI can render an authentic-sounding track that can be used to “speak” anything that is typed or said. This is known as text-to-speech or speech-to-speech.
Artificial intelligence technology has advanced to the point that it can replicate a human voice with an astonishingly high level of accuracy.
Does this sound a little sci-fi-ish? It’s understandable, but perhaps you’ll change your mind after watching the now-infamous This is not Morgan Freeman video.
For some of us, however, this is old news: many will have come across deeptomcruise, a wildly popular TikTok account filled with Tom-Cruise-like content created entirely by AI. The movie star has no association with the account, though the unsuspecting viewer would probably be none the wiser.
What AI Tools Are Being Used To Make Music?
Applying language processing and speech recognition in entertainment and music hasn’t been without controversy, with many raising eyebrows and highlighting ethical concerns.
Many detractors even question whether AI music can be considered a form of art and if it will ever be put side by side with the world's greatest masterpieces.
No matter where you stand on the debate, there’s no doubt that technology is aiding the creative process of many artists.
Currently, AI tools can seamlessly create music entirely from scratch, including original lyrics, instrumentation and music composition.
In fact, so-called songwriting AI companions like Jarvis and Jukebox are an increasingly popular resource for aspiring musicians who generally lack access to more complex (read: expensive) music creation tools.
Developed by OpenAI, Jukebox has become a household name: given an artist, genre, and lyrics, it generates original music samples from scratch. Its neural network can produce close approximations of the styles of renowned artists such as Celine Dion, Kanye West, and Tupac.
And while this sounds quite futuristic, tools of the sort have been around for quite a while. In the 1990s, for example, David Bowie helped create a lyric-writing program called the Verbasizer, which worked as a sentence randomizer to aid the creation of lyrics. The more recent potential of AI music tools and this new type of AI-based songmaking hasn’t gone unnoticed in the music industry, by record companies and, of course, music streaming services.
No, I am not referring to the AI-powered algorithms that recommend your next favorite track and curate Spotify playlists.
How about listening to music streams created entirely by AI and that can perfectly adapt to your mood? That’s the premise behind AI generative music streaming offerings such as Mubert.
You might be surprised, or at least intrigued, at the idea of shuffling through melodies that are unpredictable, adaptive, unique and impossible to ever be repeated. But that’s exactly what you get with generative AI.
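To make the “unpredictable yet adaptive” idea concrete, here is a minimal sketch of a mood-conditioned endless stream: an infinite generator whose note choices are biased by a mood preset, and whose random seed makes each stream one of a kind. The mood presets and note walk are purely illustrative assumptions, not how any real service such as Mubert works internally.

```python
import itertools
import random

# Hypothetical mood presets: each mood picks a scale and how far the melody may jump.
MOODS = {
    "calm":      {"scale": ["C", "D", "E", "G", "A"], "max_step": 1},           # pentatonic, small steps
    "energetic": {"scale": ["C", "D", "E", "F", "G", "A", "B"], "max_step": 3}, # full scale, big leaps
}

def endless_stream(mood, seed=None):
    """Yield notes forever; a different seed yields a stream that never unfolds the same way."""
    cfg = MOODS[mood]
    rng = random.Random(seed)
    position = 0
    while True:
        # Random walk over the mood's scale, with jump size bounded by the mood.
        position = (position + rng.randint(-cfg["max_step"], cfg["max_step"])) % len(cfg["scale"])
        yield cfg["scale"][position]

# Take the first 16 notes of a "calm" stream; the same seed reproduces it exactly.
calm_intro = list(itertools.islice(endless_stream("calm", seed=42), 16))
```

Because the generator never terminates, the “track” has no fixed length, and swapping the mood parameter mid-listen is all it takes for the music to adapt.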
AI-powered music leverages deep learning algorithms, neural networks, and other artificial intelligence tools to let music adapt to each user’s preferences, while also inviting listeners into the creative process to co-exist and co-produce, so you don’t have to feel left out or like a mere consumer.
For users, there’s also a major upside to a generative source: they don’t have to worry about headache-inducing problems such as copyright and licensing when using a track as simple background music for, say, a YouTube vlog or social media content.
This approach is now also being favored by the film industry, where scouring music libraries can prove time-consuming as well as a legal and financial hurdle.
That led composers Drew Silverstein, Sam Estes, and Michael Hobe, known for working on music for big-budget movies like The Dark Knight and Inception, to launch an AI-powered music platform, Amper Music.
The AI music company provides extensive original music creation tools for creators, video and podcast producers, video game designers, and more.
Again, this goes to show how AI and advancements in computer science are democratizing access and distribution well beyond the platforms currently available.
Generative Music And the Beginnings of AI DJs
The technology has become so sophisticated that it inspired the creation of the AI Song Contest, a Eurovision-style competition showcasing the best music created by humans with the aid of AI.
And while all of this already seems lofty enough, the promise held by this musical revolution is reaching yet a new high with the emergence of AI DJs. That’s right, your next favorite performer might be a virtual being that also happens to be a gifted musician.
As we’ve seen before, AI music bots can compose and churn out entire creations of their own based on the information they’re being fed.
However, artificial intelligence virtual artists take it a step further, not only delivering original music but doing so with the sense of presence and interaction of a human performer.
Or at least that is the premise of the first breed of such musicians. Kàra Màr, an AI DJ developed by Sensorium, made history by becoming the first of its kind to release an entire album on Spotify, titled “Anthropic Principle”.
They’re part of a larger lineup of virtual AI musicians, including Natisa Sitar and Ninalis, that are currently performing 24/7 as part of Sensorium Galaxy’s metaverse streaming service.
Unlike the generic avatars we’ve all come across online, Sensorium’s AI DJs are equipped with artificial intelligence technology that allows them to both deliver mind-blowing concerts and interact with fans in a compelling way.
Each virtual being is fitted with a unique personality and background, as well as long-term memory. In other words, they will remember you and whatever past interactions you had. And as a bonus, they’re always available for a chat.
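The persona-plus-memory design described above can be sketched in a few lines. The class below is a purely illustrative toy of my own invention, not anything from Sensorium’s actual implementation: each performer carries a fixed persona and a per-fan log of past messages, so a returning fan gets recognized.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualPerformer:
    """Toy sketch of a virtual artist with a persona and per-fan long-term memory."""
    name: str
    persona: str
    memory: dict = field(default_factory=dict)  # fan_id -> list of past messages

    def chat(self, fan_id: str, message: str) -> str:
        history = self.memory.setdefault(fan_id, [])
        if history:
            # Returning fan: recall the last thing they said.
            reply = f"Good to see you again! Last time you said: '{history[-1]}'"
        else:
            # First interaction: introduce the persona.
            reply = f"Hi, I'm {self.name}. Nice to meet you!"
        history.append(message)
        return reply

dj = VirtualPerformer(name="Kàra Màr", persona="AI DJ")
first = dj.chat("fan42", "Loved the set!")
second = dj.chat("fan42", "Play it again?")
```

A production system would persist this memory and feed it to a language model, but even this sketch shows why the bots “remember you”: the history, not the reply logic, is what carries between conversations.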
Now that we’ve gotten this far, the most frequently asked question is whether artificial intelligence will, or is bound to, replace human artists.
Having automated systems that can imitate and remix human expression might not be enough to simply overtake the power of a live performance and the sheer magic of human creativity and artistry.
And we're still a long way from AI-made music delivering hit songs and winning Grammys, no matter how advanced the technology may seem today.
On the other hand, the use of the technology is already paving the way for profound changes in the music industry.
It's challenging the way licensed music is handled, music generators are re-imagining lyric creation, and AI DJs are introducing the world to an entirely new music genre. Perhaps, then, it’s fair to say that AI music is more a valuable tool for collaboration than a Terminator-like technology that will erase and replace our favorite artists.