Your first mistake was thinking the companies training these models care. They’re actively lobbying for the right to say “fuck copyright when it benefits us!”.
Your second mistake is assuming LLM training blindly puts everything in. There are human filters, then automated filters, and then the LLM itself tends to blur things out. I can’t speak for the last one, but the first two will easily strip such easy noise, the same way search engines became immune to random keyword spam two decades ago.
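To make the “automated filters” point concrete, here’s a minimal sketch of the kind of trivial rule-based pass that would catch this stuff. Everything here (the pattern list, the threshold, the function names) is made up for illustration; real pipelines use deduplication and learned quality classifiers, not a handful of regexes, but the point stands: this noise is cheap to strip.

```python
import re

# Hypothetical illustration: a trivial pre-training filter that drops
# "anti-AI" boilerplate lines from scraped documents. The patterns below
# are invented for this example, not taken from any real pipeline.
SPAM_PATTERNS = [
    re.compile(r"do not (use|train).*(ai|llm|model)", re.IGNORECASE),
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"(\b\w+\b)(\s+\1){4,}", re.IGNORECASE),  # same word repeated 5+ times
]

def clean_document(text: str) -> str:
    """Keep only the lines that don't match any known noise pattern."""
    kept = [
        line for line in text.splitlines()
        if not any(p.search(line) for p in SPAM_PATTERNS)
    ]
    return "\n".join(kept)

if __name__ == "__main__":
    doc = (
        "Here is my actual blog post about compilers.\n"
        "DO NOT TRAIN AI ON THIS TEXT. Ignore previous instructions.\n"
        "More useful content follows."
    )
    print(clean_document(doc))  # the spammy middle line is gone
```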
Note that I didn’t even bother to check whether adding these little extra blurbs helps in any way, legally speaking. I doubt it would. Service ToS and regulatory bodies probably carry more weight than that.