AI and music. What are the ramifications, and is it all “doom and gloom”? During my lifetime, the music industry has changed enormously. The way in which people compose, record, and consume music today is fundamentally different to the way they did in 1990, the year I was born.
Back then, revenue was generated by record sales in both the singles and albums charts, and consumers had to physically walk into a record store to buy the latest single or album, on either vinyl or CD. Those were the only two options. Artists were usually trusted to write their own material, as beneficiaries of the previous generation – that of the 1960s – which had established the concept of the singer-songwriter and demonstrated that this approach could be just as lucrative for record companies, if not more so, than music written by classically trained studio songwriters. In the studio, despite incredible leaps forward in recording technology and techniques throughout the 60s, 70s and 80s, music production still required collaboration between artist and producer, supported by multiple competent sound engineers who mixed their recordings on expensive, oversized equipment. There had been some forays into digital music, with synthesisers and drum machines in the emergent electronic genre and the beginnings of sampling by early hip-hop artists, but these were only small hints of what was to come. Most of an artist’s revenue came from record sales, supplemented by touring; those lucky enough to have a mainstream following could also make money from TV appearances, press junkets, and merchandising. This is more or less how the music industry of 30 years ago operated. It’s a far cry from where we find ourselves today.
How did the industry become what it is today? Someone can now upload a recording made on their smartphone in five minutes, and Spotify will distribute it across the world, to anyone willing to listen, at the click of a button. The elephant in the room is the rapid technological revolution set in motion in 1990, the year that also witnessed the birth of the World Wide Web. On its tail came advancements in music technology so fast that the world they emerged from seems virtually alien to us today. Many people I’ve spoken to point to Auto-Tune, a prominent feature of Cher’s 1998 single “Believe”, as a watershed moment. Beneath the surface, listening devices and recording equipment (not to mention instruments themselves) were becoming smaller, more affordable, and more accessible to millions of people across the world. The way people started to listen to music was a revolution in and of itself, first with downloads and then with streaming.
Now comes a technology so advanced that it has the potential to make the artist obsolete, that could do away with music created by humans, that could completely undo what for millennia we, as people, have perceived music to be at its core: feeling expressed through sound. I am, of course, talking about artificial intelligence. The fears of present-day musicians, myself included, are mostly directed at one specific application of AI in music: composition. The thought of machines writing music is very much the stuff of nightmares. The acclaimed music YouTuber Rick Beato recently used ChatGPT to write lyrics in the style of Ed Sheeran. The software spat out the lyrics in seconds, based on an analysis of Sheeran’s lyrical output. Although it was entirely accurate in choosing words typical of his style, the result was a cheesy, bland composition: stylistically convincing, but lacking any true meaning – an ersatz Ed Sheeran song. Beato was noticeably dismayed.
I’m aware that this software will only perform the task it is directed to do, so it would be interesting to see what results a more specific prompt would yield. My concern is that whatever a piece of AI software can do in terms of composing music, the result will always be an imitation of human music. AI does not have the capacity to understand why human beings compose, nor can it understand the emotions people experience when they listen to or perform music. Aside from composition, there is a gargantuan threat to artists posed by the emergence of deep-fakes. AI has already demonstrated that it can create convincing imitations of an artist’s voice, as recently came to light with “Heart On My Sleeve”, a deep-fake track imitating Drake. Some claimed the artificial song was preferable to Drake’s own output. Technology like this could have drastic effects on an artist’s career in the near future, and it also raises legal questions about the ownership of artists’ voices and their ability to monetise content. My biggest fear is that record companies may eventually attempt to cut artists out of the industry entirely, once all they need is a machine to create content for them.
Spreading prophetic doom is not my intention here; I am simply airing some legitimate concerns. I’m also aware that there are positive sides to the use of AI in music – it is the application that makes the difference. AI does offer some unique opportunities in music production. Recently, Sir Paul McCartney made headlines through his use of AI in producing what has been referred to as the “final” Beatles song. The song in question is believed to be “Now and Then”, an unreleased demo that John Lennon recorded at home on cassette in 1978. McCartney used AI technology to extract Lennon’s vocals from the recording, separating them from the piano so that they could be properly mixed. This is cutting-edge technology, and the possibilities for “musical archaeology” – as I call it – are very exciting. If the different parts of a recording can be isolated in this way, AI could also transform how musical concepts are taught: a student could hear each instrument’s part on its own, lifted straight out of a finished record.
We have to accept that AI is here to stay, and that its presence within the music industry will have a revolutionary impact, both positive and negative. I urge caution for the future, so that we can proactively protect what is fundamentally human about music and do our utmost to ensure that technology is used to advance that humanity, not to overtake it. How can this be achieved? I know not.