Why AI is Forcing Music Fans to Confront Uncomfortable Truths

Image: Stefan Brending / Wikimedia Commons

Just the other night, I had a bizarre nightmare. Instead of some terrifying shadow spirit, it was Spencer Charnas, frontman of American metalcore giants Ice Nine Kills (INK). Every time I flicked off the light, there he was, leering with that signature theatrical menace. It was both hilarious and terrifying. The truly wild part? When I woke up, the man and his band had unexpectedly landed in hot water, almost as if the dream had been a premonition. No exaggeration, no joke: it was genuinely weird.

Weirder still is the controversy itself. The band was trapped in the internet’s glare over their recent use of artificial intelligence (AI) for promotional material and merchandise. Initially, this brief foray into AI sparked concerns that are increasingly common and, frankly, justified. In an industry where the commercialisation of art is paramount, the growing favour shown towards AI, often under the guise of ‘improving creativity’, raises valid questions about authenticity and artistic integrity. For many, this could have been a fleeting moment of internet outrage, perhaps resolved with a quiet retraction or a sincere statement acknowledging fan sentiment.

However, the situation took a sharp turn when Charnas issued an incredibly sarcastic response, alienating not just detractors but also loyal followers. This response transformed a contained ‘glare’ into a caustic laser beam of scrutiny, primarily fuelling Reddit threads that questioned the band’s integrity. The backlash escalated to the point where Reddit moderators themselves stated: “If this really is going to be their official stance, then we will not be allowing any INK in here going forward. New music, tour announcements, etc.”

This incident, while specific to Ice Nine Kills, highlights how AI’s rapidly expanding presence in music is forcing fans to confront deeply unsettling questions, not just about the integrity of the art they consume and the values of their favourite artists, but crucially, about the very legality and ownership of the sounds those artists create and disseminate. When AI is used for songwriting, vocal generation, or even marketing, it fundamentally alters the unspoken contract between creator and audience: the expectation of authentic human creation, genuine artistic intent, relatable personal connection, and ethical industry practices. This raises critical questions about who profits from art and what we’re listening to.

AI as a whole does not need to be inherently negative. But deploying AI in core creative aspects, the areas where the work is expected to come overwhelmingly from the artists themselves, risks cheapening the art and the artists’ perceived efforts. As a fan, you don’t just connect to a sound or the music itself. You want to connect to the human behind it. It’s far more compelling to hear how a musician found inspiration for the punch of a snare drum than to learn the prompts they typed into an AI bot to generate a phrase-by-phrase recreation of something they could have produced themselves. We’re already seeing the rise of more AI tools: BandLab’s SongStarter generates unique beats and melodies, while Atomic Songwriter, a suite of AI tools, aims to “inspire songwriters” and “break through creative blocks” by essentially writing lyrics for you. But isn’t the very process of making music supposed to be the whole point of becoming a musician?

Consider the uproar earlier this year when rumours circulated that Playboi Carti allegedly used AI to recreate his voice on his recently released record, MUSIC. His fans set social media ablaze when they realised the music might not have been made by him, let alone sung by him. Whatever one makes of it, the use of AI is a stark allegation to level at an artist whose primary instrument is his voice: the very thing that kickstarted a new genre of rap and, arguably, a legion of devoted fans. With so much of rap music’s history grounded in the authenticity and honesty of individual MCs, and with Carti taking so long to supposedly work on this new record, any suggestion of using AI to take a shortcut was bound to cause an uproar.

The ever-present threat of copyright infringement adds to this tumultuous situation. It’s no secret that the last thing any musician wants to hear is an accusation that they are ripping off somebody else. With AI, it can happen to anyone, accidentally or otherwise. We’ve collectively spent countless hours watching news feeds for updates on high-profile legal battles: from Ed Sheeran’s protracted struggles over ‘Thinking Out Loud’ to Dua Lipa’s defence against claims that the chorus of ‘Levitating’ was borrowed from a track by Florida reggae band Artikal Sound System, and Lana Del Rey’s dispute with Radiohead’s publishers over ‘Get Free.’

These proceedings are, without exception, as mentally draining as they are financially ruinous for everyone involved, be it the accused or the accusers. Now, imagine this already complex and costly landscape saturated with AI-generated sounds that may, knowingly or not, draw from copyrighted material. For instance, the multibillion-dollar AI firm Anthropic was recently sued by Universal Music Publishing Group (UMPG), the publishing arm of UMG, for allegedly engaging in “systematic and widespread infringement of their copyrighted song lyrics” through its chatbot, Claude. AI models are frequently trained on large datasets of pre-existing music, lyrics, and vocal styles, often without explicit authorisation or licensing.

This means an AI-assisted song, even if created with innocent intent, could inadvertently trigger a lawsuit, placing an undue burden on artists and muddying the waters of artistic ownership. As Mitch Glazier, CEO of the music industry trade group the Recording Industry Association of America (RIAA), succinctly put it, these lawsuits “document shameless copying of troves of recordings in order to flood the market with cheap imitations and drain away listens and income from real human artists and songwriters.” The very notion of independent creation, already a battleground in human-to-human cases, becomes almost impossible to prove or disprove when algorithms are involved. So when artists use AI, they are arguably complicit in the struggles of their fellow creators.

These cases raise crucial questions, not just about the final product, but about commitment. If an artist barely puts any work or effort into the making of their music, be it for their fans or themselves, who is to say they won’t put that same minimal effort into merch designs or even its promotion? Who is to say that the next time you see them live, the production won’t have been severely downgraded, or that you’ll have to wait hours for them to turn up, or worse, that they’ll cancel the show at the last minute? In an age where AI generally makes it easier for everyone, including musicians, to cut corners, the question will slowly morph from ‘will AI be used?’ to ‘why, how and under what ethical guidelines?’ However that question is answered, musicians will need to find a careful balance between convenience and authenticity if they don’t want to risk alienating their fan base.

Words by Mishael Lee

