I'm Flora Lichtman and you're listening to Science Friday.
In recent months, AI music seems to have exited the realm of novelty act and moved into the world of having living, breathing fans.
But what's the impact going to be? Science Friday producer and musician D. Peter Schmidt is here to investigate. Hey, Dee. Hey, Flora. AI music got on my radar last year because I kept getting these videos in my algorithm: AI-generated songs describing how various pieces of heavy machinery work.
Pretty mainstream stuff. But for the first time, last year some AI-generated songs actually got onto the charts, like this one from Xania Monet. And some pretty big names in the music industry have gotten pretty vocal about using it, too, like Timbaland. "He's like an assistant, though. I'll do a beat, and I'm like, how would you take these drums and rearrange it this way? And I'm like, oh, I would have never heard it that way." So what's going on right now at AI music companies like Suno? And is this just another tech upgrade to the music-making process, or is it something else?
I wanted to call up one of the journalists I follow on this topic, Kristin Robinson, senior writer at Billboard, who covers AI in the music industry. Hey, Kristin. Hey, Dee. I was wondering if you also had a moment last year where you were like, "Oh, this stuff has kind of gotten to another level now." You know, I think it was around Xania Monet, who you mentioned in the intro. I think Xania Monet was a real turning point, but you could point to a few different turning points. It just kind of felt like a lot of stuff started happening really fast.
So to back up a little bit before Xania Monet, you know, June hits, and I find that there's this song on TikTok called "A Million Colors" by Vini Prey.
I was seeing it in TikTok clips with Kylie Jenner doing her makeup to the song, and I realized that the song sounded kind of weird, and it turned out it was an AI song, and it was towards the top of the viral chart on TikTok. And then that just kind of felt like the first domino, and then things just totally got out of hand.
Later that summer, this band called The Velvet Sundown, which was fully AI-generated music with AI-generated images to correspond with it, really caught fire online. And then Xania Monet was in September. She became a big headline for us at Billboard because she signed a reportedly multi-million-dollar record deal with a traditional music company called Hallwood Media, who was known for just working with regular artists previous to this point.
And I think that a lot of people in the music industry consider Xania Monet's signing, and the fact that her songs were starting to climb on our gospel charts, as a really big turning point where AI music has suddenly arrived. When we say she signed with this label, who is the person getting the money here? OK, that's a great question. So Xania Monet is the AI-generated avatar and character created by a woman named Telisha Nikki Jones. Telisha lives in Mississippi. She considers herself to be a poet, but isn't someone who knows how to get to a finished product of a final song, and so she says Xania Monet is like a character for her to express her poetry through song.
And so the person signing that deal would be Telisha Nikki Jones, and the royalties would go back to her. She has a manager as well, and what they would probably say is that these AI-generated characters or personas are no different from how Damon Albarn created Gorillaz, with the little cartoon characters that kind of represented the band. They think that this is a way for artists to maybe express themselves in genres outside what they're typically known for. So they would probably say that this leads to more experimentation. And yeah, it's very interesting. There is a woman behind Xania Monet.
Yeah, I mean, can you talk a bit more about what kinds of genres we're seeing AI music kind of glom onto? We've got gospel, we've got these kind of weird heavy-machinery videos. What other genres are coming up? I am seeing a lot in the gospel and Christian realm. I'm seeing country music. That "A Million Colors" song that I mentioned is more of a doo-wop throwback, you know, a '50s rock song.
I think what I'm really seeing is that it's going for niche genres that tend to be fairly formulaic. Country music is not a super complex genre. Of course it has so much heart to it, and that's why we all love it, but the core structure is usually pretty simple. It's usually a verse, chorus, verse, chorus, bridge, chorus kind of structure.
There's not a ton of experimentation going on in that genre, and the lyrics tend to follow specific tropes, and I think that makes it a little bit easier to make a realistic-sounding AI song in those genres. Deezer, the French streaming service, has done a lot of research in this field, and they've said publicly that their research shows that 97% of listeners cannot tell the difference between an AI-generated song and a human-made song. So I think it's very possible that some of these AI songs are being listened to and consumed
by people who are not fully aware that they're listening to AI music. I mean, do you think AI music has gotten, quote-unquote, good? Are you one of those 97% of people who has trouble telling the difference? Sometimes it is hard to tell. I think the big tell still is that the audio quality isn't fully there. There's a little bit of a scratchiness, or it sounds a little digital. It's like pixelated, I don't even know how to describe it. Yeah, the audio version of pixelated. Yeah.
I think that that's the big tell, and if you're in an environment where you don't have good headphones, if you're listening on your iPhone speaker, I think it's actually pretty easy to get fooled now. I mean, I guess I would consider myself part of the 97%, although I think I can discern a lot better than your average person just based on the nature of my job. Yes.
You've talked to musicians like Imogen Heap and Charlie Puth about their use of AI. What sense do you get from musicians about how they feel about AI music? Maybe we can start with those examples first. Well, Imogen Heap has always been on the cutting edge.
If anyone who's listening here is familiar with her work, she's always been both a musician and a technologist. She really feels that technology can make her art more impactful and take her to new places creatively. So she's leaning in pretty hard, but she is still very concerned about AI music models that are trained on works like hers without any compensation for those they're training on. So she still tries to stay away from companies like Suno, which currently have models that are being trained on copyrighted material without licensing or compensation for rights holders. But yeah, I'm seeing musicians really divided.
I don't really think you can say everyone's doing this or everyone's doing that. I would say that a shocking number of professional songwriters and producers have been telling me, mostly off the record, that they are using Suno as part of professional songwriting sessions now. So a lot of them have posited to me that there are probably songs on the Hot 100 right now that have bits and pieces of AI-generated material that is not disclosed. A little crazy to think about. I want to go to Suno. Can you give us an idea of who the main AI music companies are? We've talked about these meme songs, and we've talked about it helping with the production process. What exactly are they selling to people? Yes, so when we think of generating songs at the click of a button, that is really dominated at this point by Suno. It's an AI music startup. Suno is quite controversial in the music industry because people feel very threatened by them.
I obtained an investor pitch deck of theirs back in November and reported that 7 million songs are being generated on Suno every day. That kind of scale scares musicians quite a bit. Although those 7 million songs aren't all necessarily making it onto streaming services, some of them are, and that potentially crowds out works made by human musicians. So Suno is a big one. Udio is another big one. They did kind of the same thing: you can type something into a text box and then out pops a song. Udio is now pivoting to do AI-powered remixing of already-made songs. This is a very popular category in AI music right now.
Spotify is even getting into this realm soon and basically what this means is that with licenses in place you would be able to take two of your favorite songs and create mashups. Maybe you remove the vocal so you can do a karaoke version. You can speed it up, you can slow it down, all these kinds of things. So you can play with music that already exists.
Well, it's funny, because last year the major music labels were trying to sue the heck out of these AI music companies, and now they're partnering with them. What happened there?
Yeah, so I think the music companies are really realizing that they can't make this go away, and so they need to find a way to extract value from it.
I think another thing to keep in mind is that very recently, within the last decade, two of the three major music companies became publicly traded companies. So they're probably getting a lot of shareholder pressure to innovate, to integrate AI, and to capture value there. They don't want to be seen as weak. They don't want to be seen like they're behind the ball. So I think that's also one of the reasons why they have been so willing to try to find reconciliation. All right, the music industry has gotten left behind a lot in the past in regards to tech, and it seems like they're changing their tune.
So with all these recent deals, do you have a sense of where this is all heading? What do you have your eye on this year? Interestingly, the music AI game has mostly been dominated by startups.
My take on that situation is that I think that music is a very hard thing to generate, and it's also not something that's a huge moneymaker. So I think it's been largely ignored by your OpenAIs and Googles of the world until now. But Google has launched Lyria 3, its latest AI music model, on Gemini. It's still not as good as Suno or Udio, but who knows how it will develop in the next year. And they also acquired an AI music company called Producer AI. So I have my eye on Google for sure, and I also have my eye on these new models from Suno and Udio. Okay, we'll keep an eye on them, too.
Kristin Robinson is a senior writer at Billboard who covers AI and the music industry. Thanks, Kristin. Thank you.
Okay, stay with us because after the break, we have one of the first musicians
who experimented with algorithmically generated music back in the '70s. And we'll hear her take on AI music. And now, a person with a one-of-a-kind perspective on AI music: musician Laurie Spiegel, a pioneer of electronic music, and also of algorithmically generated music.
And like today, it raised some eyebrows at the time. She wrote code for some of the first computer music technologies, and her 1980 album, The Expanding Universe, is considered one of the greatest ambient music albums of all time. A song from that album is even on the Voyager spacecraft's Golden Record. Laurie, it's so great to have you here. Hi, I'm glad to be here. Did people think what you were making in the '70s and '80s was music, kind of like this AI conversation right now? Did you get shade when you got into this?
There was a lot of heavy anti-computer sentiment back then because computers belonged to the most oppressive of organizations only. They weren't personal computers yet. And it was the government, the banks, the insurance companies, the military who had computers.
And the computers, innocent things that they were, inherited the image of the oppressiveness of their controllers in the public eye. Because computers, you know, were called inhuman. They were hostile to the arts. They were not the warm, cuddly little laptops that we are used to at this point. So I was often accused of dehumanizing music. But of course, technology is the most human thing around. I mean, we are, by far, the animal that does the most technology.
You know, I think in the late '70s, you worked on an algorithm to kind of replicate Bach's harmonic style. Yeah, you know, Bach is just a superlative ideal for me, an inspiration. And so I studied the harmonic progressions of the Bach chorales extensively and wrote an algorithm, way simplified compared to the mind of Bach, that basically generated harmonic progressions that I felt were meaningful.
Yeah, and obviously he's a super mathematical kind of composer, and it makes sense to be like, how can I translate this into an algorithm? I mean, the other side of the AI music excitement is, you know, there have been these studies of people who use these large language models experiencing something called de-skilling, where you end up relying so much on these models that you kind of end up outsourcing a lot of your skill to them, and then that skill atrophies over time.
And yet different skills are evolved during that process, because the writing of the prompts for an AI system is in itself at the very first stage of becoming an art form, I think. But it's quite different from the moment-to-moment generating of sound in response to your momentary emotions, the self-expressiveness of playing music. The way AI is being done, by giving a prompt and then waiting for a fabricated result, is quite different from the expressive nature of playing an instrument, which is visceral, it's tactile.
I mean, I've heard some music producers talk about using Suno, one of these AI music products, because they don't want to be left behind, and I've seen that language being used with other AI tools. What do you make of that? I mean, did you feel like you were going to get left behind back in the '70s if you didn't engage with computer programming and making fresh music? Just the opposite. I was way out ahead, to the point where it was impossible to explain to people what I was doing. I was not at all left behind, I was on the lunatic fringe. I couldn't explain it to people. People would say, "Oh, you do music, what kind of music do you do?" and I would say, "Well, I'm using computers," and their expression would immediately change and they'd want to change the subject, too. In the arts, it's not a matter of keeping up. It's a matter of something honest and authentic coming from inside of you that you can embody in an experience external to you, that you can share with other people. Everybody's trying to keep up with what is new. That's not what makes high-quality artistic expression. It has to be from inside of you.
The music itself is what's important, and that's not something which is reliant on an individual technology. It's gone through many centuries of evolution as different technologies were added. It's still obvious to us what's really good music, from the Renaissance that moves us and grabs us, or the early 20th century, or whatever. It's what it does for us. What music does for us is what's important. I don't think it's really worth worrying about keeping up with tech. It's just how you use it. Yeah, it seems like so many things with AI are forcing us to ask these really basic questions about the things that we like and why exactly we like them. I know you were just talking around that, but what does music mean to you? I don't know. I mean, the question that I posed at the beginning is the question of what the purpose of music is for people, and what parts of what music does to us these AIs are able to satisfy. They obviously can generate music-like material on demand, but it's not necessarily the expression of emotion or feeling.
I really do want to play with it a bit more. I know that the writing of prompts is a very indirect way of making music, much like writing all these little dots on staff paper with a pencil, and then you get back a result which is not what you had anticipated, because they're not really interactive. Emotions are something we don't understand very well yet, but they are a rock-bottom, essential component of music, and this is where the AIs kind of fall down. They don't have them. And while they will probably figure out how to trigger them and evoke them eventually, and really good prompters might be able to do that, it's still very much in its infancy. These non-interactive generative programs, I guess you could call them, speak the language that they have read all over the net or throughout the repertoire, and they've parroted it back, but they don't understand it on the gut level that we humans experience it.
Laurie Spiegel, a pioneer of electronic music and algorithmically generated music,
thanks for being with me, Laurie. Thank you for having me.
By the way, one of Laurie's best-known pieces of software, Music Mouse, which she made in 1986, recently got re-released on modern computers. It's like an interactive instrument you play with your mouse, where you basically drag your mouse around a musical grid and it makes these fun chords and melodies. If you want to try it out, you can find a link to it on our website,
sciencefriday.com/music. Thank you, Dee. This fantastic episode was produced by D. Peter Schmidt. And listeners, if you have thoughts or feelings on this or anything else that we cover, we're always here for it: 877-4-SCIFRI. Thank you for listening. We'll see you tomorrow.


