Your Voice, My Voice, the Bot’s Voice
The blander side of generative AI
When we last left Fake Drake (the viral AI hit), the music industry was freaking out, while the artist Grimes announced she would split the proceeds for any track made with her voice. Some predicted a sudden wave of viral AI hits featuring major-label artists, but that hasn’t really happened yet, which is what most of us working in music AI expected. As it turns out, making music is hard, and the AI tech isn’t totally there yet. But Grimes’s announcement has inspired a bunch of tracks made with her voice, which I think represents the most likely shape of AI covers in the short term: fans experimenting with their favorite artists’ voices, primarily for fun.
Longer term, there will surely be many ways for artists to monetize their likeness and voice via generative AI, something I talked about in this week’s Bubble Trouble podcast with former Spotify economist Will Page. But what most excites me about this space is not how it replaces humans, but how it intersects with and enhances human creativity.
Yes, I know this is what all people working in creative AI say. “The bots aren’t going to kill us, they’re going to teach us how to paint watercolors!”
But hear me out.
Inspired by the Grimes announcement, Nick Thompson, CEO of The Atlantic, posted on LinkedIn that he wanted to take one of his instrumental compositions, separate out the saxophone, and replace it with Grimes’s voice. He split his recording into guitar and saxophone stems using AudioShake’s separation AI, then tried two AI voices (Grimes and Holly Herndon) to find the one he liked. You can read his post and hear his final creation here.
Notice how Nick talks about his AI track. His new creation isn’t a static, finished thing. It’s a jumping-off point: a re-imagining of his original work that may lead him in a new direction. How cool is it to hear your own art re-imagined? Maybe he’ll now be inspired to add a singer or a different instrument to his track, having heard some semblance of what that could sound like after just a few minutes of work.
What’s true for music holds for other areas too. Right now, AI creative writing is pretty underwhelming: the sentences are grammatically solid, but the sentiment is maudlin. Still, it’s fun as an assistive tool. Yesterday I was trying to work on my novel, and I couldn’t come up with a satisfying way to describe a Ferris wheel. I asked ChatGPT, which gave me ten different answers, all bland and unusable. But one of them mentioned gondolas, which got me thinking about gondolas, which then led to a possible way forward. It was AI that gave me the creative spark, but the final output was completely mine.
On that note, I’ll end this newsletter with some human-generated fiction. I have a new short story this month in Hayden’s Ferry Review called “Middle School Math.” It has nothing to do with AI; it’s about being a teenager, a period of my life that I would’ve happily outsourced to even the most mediocre bot.