You’re listening to music on Spotify. You’re in a 1970s mood and come across a playlist titled Classic Rock Anthems 60s & 70s. You hit play. One iconic track follows another: The Rolling Stones, Pink Floyd, The Beatles, Bob Dylan. Then suddenly, between these giants of music, a new song appears. You’ve never heard it before, but it sounds like a cross between Neil Young and the Eagles. A fusion of psychedelic ’70s textures, cinematic alt-pop and analog soul. That’s the sound of The Velvet Sundown. More than one hundred thousand monthly listeners, and their most streamed songs have racked up millions of plays.
There’s just one detail: The Velvet Sundown don’t exist. Every track, just like the band’s photos, was generated by artificial intelligence.
Imagine you’re in the mood for some good jazz. You start a playlist hoping that sooner or later you’ll hear Louis Armstrong or John Coltrane. Maybe something more introspective like Bill Evans or Miles Davis. Instead, you end up listening to Hara Noda.
But who is Hara Noda?
What’s striking is that this artist’s tracks also have millions of streams. A surprising fact, especially when you discover that Hara Noda is another “ghost musician”. Is it really possible that an entirely AI-generated artist could reach such a large audience simply by appearing in a background jazz playlist?
Hara Noda is presented as a real artist based in Sweden, the same country where Spotify is headquartered. Coincidence? Perhaps not. The number of fake artists whose music originates in Sweden is remarkable. So what exactly is going on?
The root of the problem lies in the increasingly passive nature of music consumption.
People often ask Alexa or other digital assistants to play background music for a specific activity: studying, working out, doing housework, relaxing. Others simply rely on curated playlists designed for those purposes. In both cases, listeners rarely pay attention to the artists or song titles. And that creates opportunities for abuse.
What’s even more surprising is how limited the media coverage of this phenomenon has been.
Fortunately, journalist Liz Pelly conducted an extensive investigation and published her findings in Harper’s Magazine. Pelly began by knocking on the doors of these mysterious viral artists in Sweden. Unsurprisingly, nobody wanted to talk about them. At least not at first.
She spent a year digging into the story, persuading former employees to reveal what they knew and gaining access to internal documents. Slowly, the pieces began to fall into place.
“What I discovered was an elaborate internal program. Spotify, I found, not only has partnerships with a network of production companies that, as one former employee put it, provide Spotify with ‘music that we financially benefit from,’ but also a group of employees whose job is to place these tracks into the platform’s playlists. In doing so, they’re effectively working to increase the share of total streams coming from music that is cheaper for the platform”, Pelly wrote in her Harper’s Magazine piece.
In other words, Spotify has entered into a quiet conflict with musicians and record labels.
According to Pelly’s sources, the program is known internally as “Perfect Fit Content” (PFC). Musicians who provide PFC tracks must relinquish control over certain royalties that could become highly lucrative if a song becomes popular.
Spotify appears to have targeted genres particularly suited to passive listening. It identified contexts in which listeners use playlists mainly as background music. That’s why the issue of fake artists first became noticeable in jazz playlists.
According to Pelly, the core PFC genres were ambient, classical, electronic and jazz.
When some employees raised concerns, Spotify executives reportedly responded that listeners “wouldn’t notice the difference”.
From payola to AI: artistic hoax or marketing strategy?
In the 1950s it was called payola. The public discovered that radio DJs were choosing songs based on bribes rather than musical merit.
Today, the transactions are handled more discreetly and, apparently, within the boundaries of the law. No one hands Spotify executives envelopes stuffed with cash. But one thing is certain: when streaming platforms optimize their systems for cheaper music, even artists like Taylor Swift lose out.
And what about music journalism?
Most of these revelations come from a freelance journalist publishing in Harper's, not from Billboard or Variety, and not from major newspapers like The New York Times, The Wall Street Journal or The Washington Post.
Fortunately, another important investigation recently came from the Financial Times in the form of a podcast series examining the impact of artificial intelligence on the music industry.
The picture that emerges is far from reassuring.
On Deezer, another major streaming platform with nearly twenty million active users in 2024, roughly 18 percent of daily uploads are AI-generated tracks. This flood of algorithmically generated music does not come only from specialized AI music companies or professional labels. In fact, most of it originates from platforms that rely on commercial music-generation models accessible to anyone, either for free or through paid subscriptions that promise better results.
Among the most well-known are Suno, Udio, MusicGen and Boomy. The latter proudly claims on its website that “Boomy artists have created 21.6 million original songs.” Many of these tracks have ended up on Spotify, which says it has removed more than 75 million tracks deemed “spam” from the platform over the past 12 months.
According to the Financial Times investigation, Spotify does not label or remove AI-generated music unless it clearly violates the platform’s terms and conditions, such as in cases of explicit plagiarism or identity theft involving real musicians.
Identity theft, incidentally, is another issue worth mentioning.
In April 2023 a song titled “Heart on My Sleeve” went viral online. In the track, rapper Drake appears to duet with The Weeknd. The song spread rapidly across the internet before it became clear that it was entirely fake: both voices had been cloned and inserted into an AI-generated track by a TikTok user known as Ghostwriter977.
The Financial Times podcast highlights the growing concerns among musicians and composers, who suddenly find themselves competing with the relentless output of algorithms. They are forced to fight for attention in a market flooded with songs generated at industrial scale.
AI-generated tracks don’t just saturate the market, making it even harder for real musicians to stand out. The technology also relies on existing music as its raw material. Often protected by copyright, these songs become part of the datasets used to train AI systems.
In other words, musicians’ work is used without their knowledge, without their consent, and without compensation.
Music without musicians: "AI, or not to AI, that is the question"
Some people are pushing back against this kind of digital exploitation. One of them is Ed Newton-Rex, founder of Fairly Trained, a nonprofit organization that advocates for musicians' rights over how their work is used to train AI systems.
But the greatest damage may be the one inflicted directly on artists themselves. Their songs are absorbed by algorithms, blended together with countless others, digested and then released back into the world as supposedly “original” compositions. These tracks are then assigned to an equally artificial artist, complete with a generated face and biography.
Once the “art” and the “artist” have been created from scratch, the song is uploaded to a streaming platform.
And what does Spotify have to say about all this?
Spotify is now stepping in, and its decision could influence the entire music-streaming industry. The platform has not banned AI-generated music itself, nor does it intend to do so. Instead, it requires anyone releasing music on the platform to hold the rights to the material they upload and not to impersonate other artists.
This balanced approach could become a standard for other platforms, creating a broader framework that protects both innovation and artistic integrity. Unlike other tech giants such as YouTube, Meta and TikTok, Spotify had until now avoided adopting systematic measures to label AI-generated content, but the new announcement marks a significant shift.
Implementing these policies presents considerable technical challenges. How can an algorithm distinguish between creative uses of AI and manipulative ones? Where should the boundary be drawn between inspiration and imitation in the age of artificial intelligence?
The grey area lies in AI-generated music that takes inspiration from an artist without directly copying their style. While direct imitation is clearly prohibited, the boundary becomes blurred when it comes to stylistic influence, a territory that has always been central to the evolution of music.
Spotify’s initiative represents an attempt to guide the music industry toward a more responsible use of artificial intelligence. “We believe that strong safeguards against the worst aspects of AI are essential in order to unlock its potential for artists and creators,” the company said in a press release, describing a future “where artists and creators can decide how to integrate AI into their creative process”.
This approach suggests a future in which AI will be neither demonized nor completely deregulated, but integrated into an ethical framework that preserves the value of human artistic work. The challenge will be maintaining that balance as the technology continues to evolve at an increasingly rapid pace.
“AI AI”: Dargen D’Amico on the risks of Artificial Intelligence
Fortunately, some musicians themselves are beginning to question the increasingly ambiguous role of AI in music. Italian rapper and songwriter Dargen D'Amico recently released a track titled "AI AI". In the song, while maintaining the irony that has always defined his style, he reflects on how artificial intelligence is reshaping our understanding of the present. In an interview for RaiPlay he explained:
“The idea for the song came from the fact that in Italy people talk far too little about artificial intelligence. Yet it’s coming, and it’s forcing us to confront questions that are becoming increasingly urgent.”
The title "AI AI" plays on a double meaning: it is both the acronym for artificial intelligence and, read aloud, the Italian exclamation of pain ("ahi ahi"). And frankly, it's hard to think of a better expression to describe everything we've just been talking about.
The singer-songwriter explained in an interview with Cosmopolitan what ultimately convinced him to complete the song, whose chorus had originally been written two years earlier:
“I saw advertisements for toys powered by ChatGPT that could put children at risk in several ways, and hackers could potentially interfere with the way they play.”
Risks that often go unnoticed:
“I spoke with specialists, people working on artificial intelligence in Italy and beyond, and together we tried to outline three main themes: the future of entertainment, the relationship between humans and machines, and the technological developments that are just around the corner. Finally, we looked at healthcare, to understand whether artificial intelligence could truly make it more democratic, because today we still live in a world where some people can access treatment while others cannot.”
Dargen D’Amico urges us to consider the future of music: as algorithms and artificial intelligence increasingly shape what we hear, who will truly have a voice? What does this mean for us as artists, as individuals? His song serves both as a warning and as a prompt to become more aware.