Of course there are other ways to create similar notes.
But now the AI developers will have to testify under oath that they did not use Johnny B Goode, and identify the soundalike song they used that is not among the millions of other IPs held by the RIAA.
I feel that this logic follows a common misconception of generative AI. Its output isn’t made from the training data. It takes inspiration from it, but it doesn’t just mix-and-match samples from the training materials. GenAI uses a statistical model (weights) that it builds from that training data, but the data itself isn’t directly referenced during generation.
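To make that concrete - with the obvious caveat that this is a toy sketch of my own, not how Suno or any production model actually works - here’s a tiny word-level Markov chain. Training reduces the corpus to transition counts, the corpus itself is thrown away, and generation only ever consults those counts:

```python
# Toy sketch only (a word-level Markov chain, nothing like a real music model):
# training keeps aggregated transition counts, the corpus is discarded, and
# generation consults only the learned counts, never the original text.
import random
from collections import defaultdict

def train(corpus_words):
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus_words, corpus_words[1:]):
        counts[prev][nxt] += 1          # only co-occurrence statistics are kept
    return {w: dict(nxts) for w, nxts in counts.items()}

def generate(model, start, length=10):
    word, out = start, [start]
    for _ in range(length):
        nxts = model.get(word)
        if not nxts:
            break
        # sample from the learned distribution; the training text itself is long gone
        word = random.choices(list(nxts), weights=list(nxts.values()))[0]
        out.append(word)
    return " ".join(out)

model = train("the cat sat on the mat and the dog sat on the rug".split())
print(generate(model, "the"))
```

The output is assembled entirely from the learned statistics, which is the (very simplified) sense in which I mean the training data isn’t directly referenced during generation.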
The way AI generates content isn’t like when Vanilla Ice sampled Under Pressure; it would be more like if Vanilla Ice had talent and could actually write music, and had accidentally written the same bass line without ever hearing Queen. While unlikely, it’s still possible, and I’m sure we’ve all experienced a similar situation: you open a comment thread to post a joke based on the headline and see the top comment is already the exact same joke you were going to make… You didn’t copy the other user, and they didn’t copy you, but you both likely share a similar experience that triggers the same associations.
For the same reasons that two different writers can accidentally tell the same story, or two different comedians can write the same joke, two different musicians can write the same melodies if they have shared inspirations. In all of those instances, both parties can create entirely original materials of their own accord, even if they aren’t meaningfully distinct from each other. The way generative AI works isn’t significantly different, which is why this is such a legally murky situation. If generative AI were more rudimentary and actually sampled the training data, it would be an open-and-shut copyright infringement case. But because the materials the AI produces are original creations of its own, we get into this situation where we have to argue over where to draw the line between “inspiration” and “replication”.
I think a common misconception of these lawsuits is that the AI output is an issue. It isn’t. It doesn’t matter what the generative AI generates. The AI developers, not the AIs, are the problem.
Let’s go back to your Vanilla Ice example. Suppose Vanilla Ice is found to have downloaded a massive collection of mp3s from The Pirate Bay. He is sued by the RIAA, just like Napster users were sued years ago.
In court, he explains that what he did is legal because his music doesn’t sample from his mp3 collection at all. And he loses, because the RIAA doesn’t care what he did after he pirated mp3s. Pirating them, by itself, is illegal.
And that’s what’s going on here. The RIAA isn’t arguing that the AI output is illegal. They are arguing that the AI output is basically a snitch: it’s telling the RIAA that the developers must have pirated a bunch of mp3s.
In other words, artists like Vanilla Ice have to pay for their mp3s like everyone else. And so do software developers.
Piracy isn’t the issue; I’m not sure if we’re referencing different things here.
How the developers came to possess the training material isn’t being called into question - it’s whether or not they’re allowed to train an AI with it, and whether doing so constitutes copyright infringement. And currently, the way in which generative AI works does not cross those legal boundaries, as written.
The argument the RIAA wants to make is that using copyrighted material for the purposes of training software extends beyond the protections of fair use. I believe their argument is that - even if the music was acquired legally - using it for the explicit purpose of making new music would be considered a commercial use of the material. Basically like the difference between buying an album to listen to with your headphones and buying an album to play for a packed concert hall, suggesting that the commercial intent behind acquiring the music is what makes it illegal.
I don’t think that’s the basis of their argument. This is the basis for the RIAA claims, which sure sounds like piracy:
On information and belief, similar to other generative AI audio models, Suno trains its AI model to produce audio output by generally taking the following steps: a. Suno first copies massive numbers of sound recordings, including by “scraping” (i.e., copying or downloading) them from digital sources. This vast collection of information forms the input, or “corpus,” upon which the Suno AI model is trained.
There is no evidence the AI devs bought any music, for any use. Quite the opposite:
Antonio Rodriguez, a partner at the venture capital firm Matrix Partners, explained that his firm invested in the company with full knowledge that Suno might get sued by copyright owners, which he understood as “the risk we had to underwrite when we invested in the company.” Rodriguez pulled the curtain back further when he added that “honestly, if we had deals with labels when this company got started, I probably wouldn’t have invested in it. I think they needed to make this product without the constraints.” By “constraints,” Rodriguez was, of course, referring to the need to adhere to ordinary copyright rules and seek permission from rightsholders to copy and use their works.
The RIAA alleges that the generators used the record labels’ songs to illegally train the models since they didn’t have the rights holders’ permission to use the recordings. But whether the companies needed that permission is unclear. AI companies have argued that the use of training data is a case of fair use, meaning they are allowed to use the recordings with impunity.
Emphasis mine. Their concern is that the music was used for commercial purposes, not how the music came into their possession. Web scraping is already legal; that’s never been a piracy issue.
Courts have found that scraping data from a public website is legal, because data is not protected by copyright. But copying protected works without permission is generally illegal; it doesn’t matter if you use a scraper.
If the defendants in this case admit using RIAA works, then they will probably try to argue fair use. At that point their product will become relevant, including its commercial nature. This will weigh against them, because their songs directly compete against RIAA songs. In fact, that’s why artists who include samples in their work usually obtain permission first.
This is the right answer.
The problem here is that we don’t have real AI.
We have fancier generative machine learning, and despite the claims, it does not in fact generalize that well from most inputs: a lot of recurring samples end up effectively embedded in the model and can thus be replicated (there are papers on this, such as those on sample-recovery attacks and more).
They heavily embed genre tropes and replicate existing biases and patterns much too strongly to truly claim nothing is being copied; the copying is more of a remix situation than an accidental recreation.
Elements of the originals are there, and many features can often be attributed to the original authors (especially since the models often learn to mimic the style of individual authors, which means they embed information about the features of copyrighted works by individual authors and how to replicate them).
While it’s not a 1:1 replication in most instances, it frequently gets close enough that a human doing it would be sued.
This photographer lost in court for recreating the features of another work too closely: https://www.copyrightuser.org/educate/episode-1-case-file-1/
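To make the “recurring samples end up embedded” point concrete - again with the caveat that this is a toy word-level Markov chain of my own, not a real audio model or an actual extraction attack - even a model that stores nothing but statistics will regurgitate a sequence verbatim once it recurs often enough in the training data:

```python
# Toy illustration of memorization: a distinctive sequence that recurs heavily
# in the training data gets reproduced verbatim under greedy decoding, even
# though the model only ever stored transition statistics.
from collections import defaultdict

def train(words):
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1          # only statistics are stored
    return counts

# A corpus dominated by one recurring "riff", the way a hook recurs across a catalogue.
corpus = ("under pressure pushing down on me " * 50
          + "plus a few other unrelated words").split()
model = train(corpus)

def greedy(model, word, steps=11):
    out = [word]
    for _ in range(steps):
        nxts = model[word]
        if not nxts:
            break
        word = max(nxts, key=nxts.get)  # always take the most likely continuation
        out.append(word)
    return " ".join(out)

# Prints the training riff back verbatim: "under pressure pushing down on me under ..."
print(greedy(model, "under"))
```

Real models generalize far better than this toy, but the same failure mode shows up at scale when material is duplicated heavily in the training set, which is exactly what the extraction papers demonstrate.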