
Understanding Generative Music - The Tech Behind Virtual DJs

Dec 6, 2021

For millennia, music has been a means of expression for humanity. And making it has been an innately human pursuit, a product of creativity and collaboration made to be listened to and appreciated by all of us. This has taken us through different music eras, from analog to electronic and now digital, each unleashing new genres and ways of consuming content.

There's certainly no shortage of creativity. The world's largest streaming music platforms feature millions of songs. Spotify alone sees over 60,000 tracks uploaded every day, which works out to roughly 22 million tracks per year. YouTube Music claims to have two billion monthly users, and Apple Music counts over 60 million paid subscribers.

To help make sense of it all, platforms rely on advanced music curation tools to sort through the endless options. If you've ever tried putting together the right playlist for jogging or studying, for example, chances are that artificial intelligence lent a helping hand in the process. AI-driven algorithms and big data have become critical in automating music selection, delivering recommendations and helping you find your next favorite track. But is having a panoply of readily available options resulting in better listening experiences and fueling creativity? Most of us end up listening to the same tracks or genres over and over again anyway. This has made some argue that the model is broken. But perhaps it's just incomplete.

We've been listening to music all along. But what if music listened to us?

Enter generative AI music services, a step up the technology ladder - and the beginning of a post-digital era in music-making. Generative music opens up a new era of opportunities where melodies are unpredictable, adaptive, unique and never repeated. This means that listening to music will no longer be a linear experience. It will transcend concerts, albums, tracks and time-limited interactions.

AI-powered music adapts to the preferences of listeners just as much as it lets them step into the creative process to co-exist and co-produce.

AI’s Got Talent

Machines making music isn't exactly a novelty.

The relationship between music and computers stretches back to the early 1950s, when a computer built by British computer scientist Alan Turing produced the first computer-generated melodies. In 1957, the Illiac Suite for String Quartet became the first musical score composed by a computer. Technology hasn't stopped evolving since, with machine learning giving rise to increasingly complex musical intelligence.

AI is now being used to create otherwise unimaginable new melodies, automating parts of music composition and, believe it or not, even taking on lyric writing. That's right - artificial intelligence is perfectly capable of creating, performing and even singing for you.

Want to listen to an album fully produced by AI? No problem, Auxuman has plenty of music. Maybe you're into alternative musical experiences, like a concert delivered by a virtual AI DJ. Sensorium Galaxy has a complete lineup of virtual artists (more on that later). What if you're in the mood for composing? AIVA is a great tool if you're looking to experiment with classical or symphonic music production.

In fact, AI has become so talented at making music that it has inspired the creation of the AI Song Contest, a Eurovision-style competition showcasing the best music created by humans with the aid of AI.

But the industry is hardly at risk of becoming a one-machine show. AI is still a long way from producing replacements for real-life artists (if it ever does); instead, it has become an invaluable co-creation tool that lets creators explore new artistic outlets.

The sound of (generative) music

AI music can produce a flow of real-time, original melodies that continuously adapt and evolve to better suit the context in which they're used, based on parameters such as purpose, mood and genre.

Mubert is one such platform, specializing in empowering users, professional or amateur, to create original, customized music with the help of artificial intelligence. To make it happen, Mubert relies on a database of millions of samples, tagged with sounds, keywords and data points, that are used to train its algorithms and produce infinite streams of customized, original sound.
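
To make the idea concrete, here is a minimal, purely illustrative sketch of how an engine might chain tagged samples into a continuous stream. The sample database, tags and selection logic below are invented for illustration; Mubert's actual pipeline is proprietary and far more sophisticated.

```python
import random
from dataclasses import dataclass

# Toy model only: assume a library of short loops, each tagged with mood/genre keywords.
@dataclass
class Sample:
    name: str
    tags: set
    bars: int  # length in musical bars

SAMPLE_DB = [
    Sample("piano_loop_01", {"calm", "piano"}, 4),
    Sample("soft_beat_02", {"calm", "lo-fi"}, 8),
    Sample("synth_pad_07", {"energetic", "edm"}, 4),
]

def generate_stream(mood: str, total_bars: int) -> list:
    """Assemble a stream by chaining samples that match the requested mood."""
    pool = [s for s in SAMPLE_DB if mood in s.tags]
    stream, bars = [], 0
    while pool and bars < total_bars:
        pick = random.choice(pool)  # a real engine would weight picks by learned preferences
        stream.append(pick)
        bars += pick.bars
    return stream

print([s.name for s in generate_stream("calm", 32)])
```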

"We let users control parameters of the music that's being generated, either in real time or within set, pre-defined changes. For example, through the Mubert API (application programming interface) you would be able to generate a track that starts in one mood and transitions to another mood around the specific mark that you would like. Or produce a mix of tracks of a specific length that would match your criteria. And further down the road, it would be great to allow developers to make this interaction between our API and other tools two-sided where, for example, some music creation tools could send samples into Mubert - and Mubert could arrange them and send the audio back into the music editing app. These are some of the ways we see the Mubert API developing in the future", explains Marko Nykoliuk, Mubert's Chief Product Officer.

Through a combination of AI and human input from producers, creators and regular users, Mubert's database is constantly evolving and expanding. Feedback is collected from user interactions and external factors like time of day or weather, feeding the creation of new music. You can mix and match, pick references of what you're looking for, and generate a track or a stream, choosing from different mastering presets to get all-original sounds in one track - all sound-engineered with AI.
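
A toy sketch of how external context such as time of day or weather could be folded into generation parameters; the mappings and preset names here are invented for illustration and are not Mubert's actual logic.

```python
from datetime import datetime
from typing import Optional

def contextual_params(weather: str, hour: Optional[int] = None) -> dict:
    """Map simple context signals to hypothetical generation parameters."""
    if hour is None:
        hour = datetime.now().hour
    params = {"tempo": 120, "mood": "upbeat", "mastering_preset": "standard"}
    if hour < 9:
        params.update(tempo=80, mood="calm", mastering_preset="warm")
    elif hour >= 20:
        params.update(tempo=90, mood="mellow", mastering_preset="soft")
    if weather == "rain":
        params["mood"] = "introspective"
    return params

print(contextual_params("rain"))
```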

These tracks are composed exclusively for each individual user, personalizing the relationship between listeners and their music.

“Our focus as a company right now is to build the best possible platform between artists and content creators. We would like anyone who's creating any kind of content, either videos on YouTube or podcasts or social media content, or even a game or a metaverse, we would like them to be able to come to Mubert and find the exact kind of music that they are looking for. The music that has the right mood, the right genre, that is perfectly tailored to the length and the dynamics of the content. On the other side of the platform, we want to give musicians a way to turn their raw ideas into compositions the creators will be able to use. The challenge is that typically when a musician is creating some music on demand for a content creator, they will get suggestions, they will be asked to re-make or re-do some little bits of it and we would like to simplify things for musicians so that they can bring raw ideas to our platform and then content creators can, by themselves, just tell Mubert how they would like those to be arranged. And of course, the platform also learns, through a lot of machine learning and algorithms, how to arrange those in the best possible way”, adds Nykoliuk.

Moreover, an API such as Mubert's opens a world of opportunities for creators by democratizing sound-making, allowing new players to enter the market and realize the potential of AI-based artistic tools. Lowering the barrier to entry in music can only be seen as a positive and progressive step toward more diversity.

Generative music services have a great advantage over pre-recorded formats, like an album or a playlist: they can tap into an endless catalog of sounds and generate music across a vast spectrum of melodies, rhythms and beats.

The CEO of Mubert, Alexey Kochetkov, believes that the technology levels the playing field for music production. "I think that the best way to use generative music is to help people who have no musical education or something else to create music. It will give them another dimension of their creativity and so they can create music, they can collaborate with each other, they can play music together or even share it or publish it in streaming services."

AI makes the process of making music easier and fairer, he suggests, complementing real-world experiences with a new realm of possibilities of what can be produced and where.

AI DJ, Take it Away

Metaverses are the perfect match for AI and generative music. As highly immersive and interactive virtual spaces, the opportunities for creation are limitless, be that in music, dancing, painting, sculpting or any other art form.

Sensorium Galaxy is a metaverse focused on providing high-quality VR and AR entertainment experiences, without any scripts or predetermined outcomes, which lets creators take full control over their creative vision. A big draw of an environment with absolute freedom is the range of possibilities that come from interacting with other users and intelligent beings (avatars), but also from engaging with a brand new breed of artists - AI DJs.

Sensorium's collaboration with Mubert has resulted in the release of the world's first AI-powered DJs, like JAI:N. This is made possible through Mubert's API, which enables these virtual artists to create real-time music, drawing inspiration from over 60 different genres, including hip-hop, EDM and K-pop.

More recently, Sensorium and Mubert teamed up to produce a set of brand new AI DJs, dubbed social AI DJs. The first to be released was Kàra Màr, who also became the first social AI artist to debut an album on Spotify. Mubert's proprietary technology allows these virtual beings to generate a constant flow of ever-changing music, while Sensorium equipped them with a virtual body, intelligence and social skills.

[Image: Kàra Màr in Sensorium Galaxy]

This unique combination lets virtual DJs hold unscripted and thought-provoking conversations with fans, creating a high level of interaction even when they're not performing.

So it's clear that we're seeing the dawn of AI creativity: for the first time ever, AI-powered artists will be performing autonomously in a metaverse alongside real-life DJs such as David Guetta, Armin van Buuren and Carl Cox, unleashing unparalleled concert experiences.

"If you imagine a metaverse, you think of it as something that's been co-created by hundreds, thousands or millions of people. Something where each person in that metaverse can bring something of their own. And if you think about the music in these metaverses it sounds ridiculous that music will be this static piece, this static mp3 audio that is the same for everyone. We think it shouldn't be like that and we think that generative music in this way - it takes what musicians are creating, takes these loops, these samples and uses them as building blocks to create something much deeper, to create music that has so many variations and they can be unique for every user in this metaverse", expanded Nykoliuk.

Unlike the physical world, metaverses have potentially infinite data points from which algorithms can learn. This allows the 'musical DNA' of each user to be progressively enriched and become 100 percent adaptable.
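
As a conceptual sketch of how such a "musical DNA" might be represented, the toy profile below accumulates preference weights from in-world signals; the signal names and weights are assumptions for illustration, not a description of Sensorium's or Mubert's systems.

```python
from collections import defaultdict

class MusicalDNA:
    """Running preference profile built from a user's in-world behavior."""

    def __init__(self):
        self.scores = defaultdict(float)  # tag -> preference weight

    def observe(self, tags, signal: str) -> None:
        # Hypothetical signals: dancing boosts a tag, skipping penalizes it.
        weight = {"danced": 1.0, "lingered": 0.5, "skipped": -1.0}.get(signal, 0.0)
        for tag in tags:
            self.scores[tag] += weight

    def top_tags(self, n: int = 3):
        return sorted(self.scores, key=self.scores.get, reverse=True)[:n]

dna = MusicalDNA()
dna.observe(["edm", "energetic"], "danced")
dna.observe(["ambient"], "skipped")
print(dna.top_tags())  # these tags could seed the user's next generated stream
```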

The social mechanics integrated by Sensorium Galaxy also encourage users and AI beings to come together in the form of unique co-creations. Ultimately, this could lead to the rise of the world's first AI superstars and influencers, with their own fan bases and monetization opportunities.

The stated goal of Sensorium and Mubert is not, however, to replace real-life music experiences but to complement them with technological advancement.

A future that doesn't sound like the past

Technology has created new ways for people to be creative by adding new physical and virtual dimensions. Thanks to artificial intelligence, never-before-imagined horizons are within reach, taking creativity to new heights where humans can dare to be even more expressive. From creation to consumption, there's no turning back from the AI-powered music revolution.

As with any other technological advancement, there are still challenges to overcome. Fine-tuning music engines and algorithms to better adapt to virtual audiences is one of them.

“In the real world, the DJ plays something and the audience reacts. It's a very natural way of interacting through music. We have done a lot of research and development in this context. I think that virtual audiences, like VR users, should experience music and DJ sets like they would the real ones. This aspect of the feedback between the audience and virtual DJs is very important. For me, it's a very hard but an interesting challenge”, concludes Paul Zgordan, Chief Content Officer of Mubert.

The virtual DJs released by Sensorium Galaxy in cooperation with Mubert are a step toward giving users a tool that aids the creative process, with humans still in the driver's seat as they reach new creative heights.

Rachel Breia
Senior Content Manager
