Will Artificial Intelligence Penetrate the Music Industry?

While algorithms have been used for a good while now to detect musical taste and make recommendations on streaming services, they are now beginning to be deployed to make music itself. Artificial intelligence has been monumental in transforming information technology, using algorithms to make cumbersome and time-consuming tasks more efficient. Once AI steps beyond that mandate and begins to create entertainment that rivals human output, things will become interesting.

Historically, artists and composers have used randomised statistical methods to make music. David Bowie worked with Ty Roberts to create the Verbasizer – a program into which Bowie could feed up to 25 sentences and groups of words. The Verbasizer would then aleatorically rearrange the words into new combinations. Bowie used this digital extension of his earlier cut-up lyric-writing to great effect, most notably on his 1995 album Outside. As with many other realms of production, the level of human input in music could be on the decline, and the implications for the music industry could be significant.
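For illustration, a minimal Python sketch of the aleatoric cut-up idea might look like the following. The sentences are invented and the real tool is not publicly available; this is only a toy version of the word-shuffling principle described above.

```python
import random

# Illustrative only: a minimal sketch of the aleatoric cut-up idea behind
# Bowie's tool. The real program accepted up to 25 sentences; this toy
# version simply shuffles the words of whatever sentences it is given.

sentences = [
    "the future whispers through broken radios",   # hypothetical inputs
    "cities dissolve into neon rain",
]

words = [word for sentence in sentences for word in sentence.split()]
random.shuffle(words)              # aleatoric rearrangement
print(" ".join(words[:8]))         # e.g. "neon through rain the cities future radios dissolve"
```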

The big tech companies appear to be converging on the concept of AI-generated music. Google created Magenta – a project designed to help artists write songs; its tools are all open-source. Similarly, Sony’s Computer Science Laboratories built an AI system named Flow Machines, which creates music after learning a range of styles from a database of songs. Its artistic director, Benoit Carré, used the system’s algorithms to compose ‘Daddy’s Car’, a song in the style of the Beatles. There are also accessible programs on the web, such as Jukedeck and Amper Music, which allow casual users and musicians to experiment, selecting genres and moods before leaving the computer to do the rest. The former says it is “training deep neural networks to understand music composition at a granular level”.
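To make the idea of learning styles from a database of songs concrete, here is a deliberately simple sketch of one possible approach: a first-order Markov chain over note names. It is an assumption-laden illustration only; neither Flow Machines nor Jukedeck has published its models, and both are far more sophisticated than this.

```python
import random
from collections import defaultdict

# Illustrative only: a toy first-order Markov chain over note names.
# It "learns" which note tends to follow which from a tiny corpus,
# then generates a new melody by sampling those transitions.

corpus = [
    ["C", "E", "G", "E", "C", "D", "E", "C"],   # hypothetical melody 1
    ["C", "D", "E", "G", "E", "D", "C", "C"],   # hypothetical melody 2
]

# Count how often each note follows another.
transitions = defaultdict(list)
for melody in corpus:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start="C", length=8):
    """Generate a melody by sampling the learned transitions."""
    note, out = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions.get(note, [start]))
        out.append(note)
    return out

print(generate())  # e.g. ['C', 'D', 'E', 'G', 'E', 'C', 'D', 'E']
```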

In the Jukedeck program, I clicked on the electro-folk genre and selected the emotion ‘uplifting’. The result was a bland, synthetic jingle not dissimilar to a beat lingering in the backdrop of a chart-topping song. I then navigated to Amper Music. Its beta version takes a different approach, focussing on ‘functional music’; it has seven specialised tabs – social media, video game, songwriting, film, TV show, podcast or radio, and app. I chose the songwriting feature but was once again given a flavourless instrumental beat that sounded well within the capacity of a human to make. Judging from these two experiences, ‘functional music’ might give AI music the most legitimacy; it can, and most likely will, have a place as background music.

The aforementioned tune ‘Daddy’s Car’, on the other hand, sounded impressive. While it resonated more like a psychedelic version of Jens Lekman than a Fab Four number, it possessed a futuristic aura. Reading the YouTube caption, though, it becomes clear that the AI input was rather minimal, as it is noted that Carré “arranged and produced the song, and wrote the lyrics”.

Perhaps the most compelling and complex program I came across was AIVA – a system that composes convincing classical pieces. Its website notes that AIVA “has been learning the art of music composition by reading through a large collection of music partitions, written by the greatest composers (Mozart, Beethoven, Bach, …) to create a mathematical model representation of what music is”. AIVA achieves this with deep learning algorithms inspired by the neural networks of the brain; the resulting sheet music is then prepared for real musicians to play. It became the first AI to obtain the international status of Composer, and its compositions have featured in film soundtracks, games and adverts. While AIVA may not give Mozart a run for his money, its compositions are palpably more melodic and expansive than the electronic, generic soundbites from Jukedeck and Amper. Furthermore, the classical music AIVA draws from is all out of copyright, making it a win-win situation for the founders.
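As a rough illustration of what a “mathematical model representation” of sheet music could mean, symbolic scores can be encoded as sequences of numbers for a model to learn from. The representation below is an assumption for demonstration purposes, not AIVA’s actual format.

```python
# Illustrative only: one simple way to turn symbolic sheet music into the
# kind of numeric sequence a deep learning model could be trained on.
# AIVA's internal representation is not public; this is an assumption.

NOTE_TO_MIDI = {"C4": 60, "D4": 62, "E4": 64, "F4": 65, "G4": 67}

# A hypothetical opening phrase: (note name, duration in quarter notes).
phrase = [("C4", 1.0), ("E4", 1.0), ("G4", 2.0), ("E4", 1.0), ("C4", 1.0)]

# Encode as (pitch number, duration) pairs - a numeric view of the score
# that a sequence model can learn to predict note by note.
encoded = [(NOTE_TO_MIDI[name], dur) for name, dur in phrase]
print(encoded)  # [(60, 1.0), (64, 1.0), (67, 2.0), (64, 1.0), (60, 1.0)]
```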

Streaming services will adore AI music as they won’t have to dish out money to third-party rights holders, although legal ownership of AI music remains an ambiguous topic. To truly get past any legal barricades, it needs to be established whether the artists using the AI programs are the ‘composers’ of the music. Individuals and businesses can currently buy the tracks they have made on Jukedeck in the form of a royalty-free licence, or, for a larger sum, purchase the copyright and own the track outright. The idea of artificial intelligence composing deep, engaging songs with lyrics seems far-fetched at the moment. There will not be an artificially generated number one single anytime soon, but AI could easily have a negative impact on the careers of composers of film and video game soundtracks in the near future, and pop artists may use it to craft the under-layers of their hits.

The quality of the existing AI music programs is mixed and the concept is still in its infancy, but there is little doubt that it will push on towards prominence. Some may be apprehensive about the possibility of AI stripping music of its creative and emotional dimension. However, when the fundamentals of music theory, in terms of the Western notation system, are plugged into a computer, the result is not necessarily a dehumanised form of music but rather an automated, expeditious one. We use the same chord progressions across a wide array of tracks, and a basic algorithm can be used to recognise them, as the sketch below illustrates.
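As a toy example, a few lines of code are enough to spot one of pop’s most recycled progressions, the I–V–vi–IV pattern, in a chord sequence. The song data here is invented, and real analysis tools would work with keys and Roman-numeral analysis rather than literal chord names.

```python
# Illustrative only: a very basic algorithm that recognises a recurring
# chord progression, here the much-used I-V-vi-IV pattern in C major,
# by matching a known pattern against a chord sequence.

POP_PROGRESSION = ["C", "G", "Am", "F"]  # I-V-vi-IV in C major

def find_progression(chords, pattern=POP_PROGRESSION):
    """Return the start indices where the pattern occurs in the chord list."""
    n = len(pattern)
    return [i for i in range(len(chords) - n + 1) if chords[i:i + n] == pattern]

song = ["C", "G", "Am", "F", "C", "G", "Am", "F", "F", "G", "C"]
print(find_progression(song))  # [0, 4]
```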

In his lecture at FutureFest, Paul Mason posited that “musical artificial intelligence will start by creating music we couldn’t think of but it might also create music that we can’t understand”. This rumination is a dystopian one, and whether we become transhumans or inherit machine-like characteristics is a matter for far-off futurological debate. The music industry has had enough to worry about recently, but the idea of AI obliterating human musical output is as dubious as it is frightening.

