Is Generative Music Really the Next Napster?

Jessica Powell
6 min read · Mar 11, 2024


Why it’s going to take a lot more than a text prompt to kill music as we know it

This post originally appeared in Trapital, accompanied by a podcast conversation with Dan Runcie.

It’s hard to find a corner of the Internet these days that’s not having a heated conversation about artificial intelligence — and AI media in particular.

Although AI has been used in the music industry for several years in features like playlist recommendations and metadata tagging, it's generative AI that has fueled most of the conversation this past year. Worries over voice deepfakes, unlicensed training on music, and an onslaught of algorithmically generated tracks have stoked fears that we're headed towards a future in which artists are replaced by bots.

As the debate rages, the number of generative music tools only grows. The past year has seen the release of numerous new models–including Google's MusicLM, Meta's MusicGen, Futureverse's JEN-1, and, most recently, Suno. Each new release unlocks a new achievement: longer clip generation, higher-resolution output, and, lately, the ability to create lyrics that match the generated song.

Many in Silicon Valley have hailed these developments as exciting breakthroughs; others in the media and music industry have suggested we’re on the cusp of a Napster moment.

While it makes for a good headline, and there are legitimate ethical concerns around some aspects of generative media, we're still far off from Napster. After all, Napster made it easy for consumers to get what they wanted: music from their favorite artists on-demand. No need to drive to the CD store and hope the album you wanted was in stock, or to wait for a specific song to play on the radio. Yes, Napster's product was built on the back of infringement–the same argument that is made against generative AI. But Napster succeeded because it addressed a market gap. When it launched in 1999, there was no legal alternative for downloading a vast catalog of music on-demand.

In contrast, how many people are clamoring for purely generative music? At its height, Napster had around 80 million monthly users. So far, we haven't seen anything like that level of demand for generative music systems, which currently allow users to type in a request like "trap lyrics with a chipmunk voice" or "fast-paced EDM track with female vocals sung as if underwater." As one tech bro recently tweeted, "Maybe I'm weird, but to me it's useless." Library and production music already exists–so what problem exactly are the current generative music systems solving?

An impressive technical feat is no guarantee of mass adoption, and if there's one thing humans have proven from the start, it's that we like to create and express ourselves. Push-button creation loses its novelty pretty quickly unless we can find a way to truly make it ours–and no one has fully cracked that yet.

For generative music to cross into Napster territory–or, to take a more positive view, to have an iTunes moment–it will likely have to intersect with what people are actually clamoring for, which is to engage with their favorite artists’ content.

Consumer research by MIDiA Research provides some clues as to why this matters. People who create music consume more than twice as much of it across various activities, from watching audio and video interviews to listening to song remixes. And for younger generations, music consumption often involves some form of creation; the two aren't standalone acts. As creation becomes easier (thanks in part–but not wholly–to AI), this blurring of consumption and creation will likely only increase and pull in even more consumers.

That’s why there likely won’t be a single transformative moment for the music industry–a technological leap that suddenly upends everything, no matter how impressive. Instead, the future is an acceleration of what began over a decade ago with song covers on YouTube and what has continued in more recent years via short-form UGC platforms like TikTok, Instagram, and YouTube Shorts. In the near future, generative music and other software tools will combine forces with powerful distribution and community features to allow fans to engage with their favorite content like never before. Generative AI will be a part of this, but it will be one of many mechanisms and not an end-all-be-all.

The winners here will be the platforms that let people unleash their creativity–the ones that figure out the right knobs, filters, and buttons that let people create playfully and collaboratively with software, just like we've been doing with Instagram filters for years. Some of those tools, like authorized voice cloning, will be generative. Others, like stem separation, auto-remixing, or genre and style switching, will use other kinds of AI. And some of them won't involve AI at all. A few years from now, as the rules and regulations around AI, training data, and permission firm up, we probably won't talk so much about which of these categories any given tool falls into. Consumers will just call them "icons," "filters," and "software."

On these platforms, fans will be able to edit a track, taking the guitar out of a rock song and replacing it with a ukulele; they'll be able to easily remix, transforming a country track into a reggaeton mix; and they'll collaborate with other users to build something entirely new. Perhaps they'll add their own singing vocals and transform them into something funnier, wilder, or simply better-sounding; and they'll be able to build, mash up, edit, and splice with the same simplicity that exists today for image and video. And because creation will happen on the same platform where distribution and tracking occur, rightsholders and artists will have more control than they currently do and will be able to benefit monetarily if they choose to participate.

And will artists choose to participate? My guess is that many will. Part of this will be generational, with artists who grew up on TikTok and YouTube more likely to embrace the chance to get closer to fans, particularly in ways that can be tracked and paid out–with mechanisms for takedown of content they don’t like. And while I would never bet on the music industry paying artists more, it’s possible that with creation, distribution, and tracking all living on the same platform, recognition and pay-outs for unauthorized re-creation and remixes–which are notoriously hard to detect, and already rampant on UGC platforms–could increase artist revenue.

In other words, while much of the debate around generative music focuses on topics like artist displacement, 100% AI-generated tracks, or whether a system has trained on Taylor Swift in order to output a Taylor Swift-esque song, what fans really want is not to replace Taylor Swift; it's to engage, recreate, duet, and build on top of her music, expressing themselves and their love for it. Creation–interesting and engaging creation–has always been a conversation between the artist and the world they inhabit; it doesn't happen at the push of a button or the single nod of a bot.

In other news…

Calling all San Francisco and Bay Area music and tech people: On March 17th, and on the eve of the Game Developers Conference, SF Jazz is hosting the latest in its series on creativity and AI. Wearing my AudioShake hat, I’ll be moderating a panel on interactive music in gaming and fitness with Roblox’s Annie Zhang, Pixelynx’s Inder Phull, YUR’s Dilan Shah, and Reactional’s David Knox. The event is free and you can RSVP here.


Jessica Powell

Technophile, technophobe. Music software start-up founder. Former Google VP. Author, The Big Disruption. Fan of shochu, chocolate, and the absurd.