• Ragdoll X
    9 months ago

    Please tell me how an AI model can distinguish between “inspiration” and plagiarism then.

    […] they just spit out something that it “thinks” is the best match for the prompt based on its training data and thus could not make this distinction in order to actively avoid plagiarism.

    I’m not entirely sure what the argument is here. Artists don’t scour the internet for images that look like their own drawings in order to avoid plagiarism, and they often use photos or other artists’ work as reference, but that doesn’t mean they’re plagiarizing.

    Plagiarism is about passing off someone else’s work as your own, and image-generation models are trained with the intent to generalize - that is, to generate things they’ve never seen, not just copy. That’s why we’re able to create an image of an astronaut riding a horse even though the model obviously would never have seen one, and why we can teach models new concepts with methods like textual inversion or Dreambooth.

    • @[email protected]
      9 months ago

      Both the astronaut and horse are plagiarised from different sources, it’s definitely “seen” both before

    • @Sylvartas
      9 months ago

      I get your point, but as soon as you ask them to draw something that has been drawn before, all the AI models I’ve fiddled with tend to plagiarize the hell out of their training data unless you jump through hoops to tell them not to.

      • @[email protected]
        9 months ago

        You’re right - as far as I know, we have not yet implemented systems that actively reduce a generated image’s similarity to specific works in the training data past a certain point. But if we chose to do so in the future, it would raise the question of formalising where plagiarism starts, which I suspect will remain challenging for the near future, as society doesn’t yet have a uniform opinion on the matter.
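
        One concrete (if crude) way such a check could work is comparing perceptual hashes of a generated image against hashes of the training set. A minimal sketch, assuming images have already been downscaled to a 9×8 grayscale grid (in practice a library like Pillow would handle that), using a difference hash (dHash); the 10-bit threshold here is an arbitrary illustration, exactly the kind of cutoff that would need formalising:

        ```python
        # Sketch: difference hash (dHash) for flagging near-duplicates of training images.
        # Assumes inputs are 9x8 grids of grayscale values (0-255); the threshold is an
        # arbitrary illustration, not an established standard for "plagiarism".

        def dhash(grid):
            """Hash a 9x8 grayscale grid: one bit per horizontal brightness gradient."""
            bits = 0
            for row in grid:
                for left, right in zip(row, row[1:]):
                    bits = (bits << 1) | (1 if left > right else 0)
            return bits

        def hamming(a, b):
            """Number of differing bits between two hashes."""
            return bin(a ^ b).count("1")

        def too_similar(candidate, training_hashes, threshold=10):
            """Flag a grid whose hash is within `threshold` bits of any training hash."""
            h = dhash(candidate)
            return any(hamming(h, t) <= threshold for t in training_hashes)

        # Usage: an exact copy is flagged, an unrelated gradient pattern is not.
        original = [[(x * y) % 256 for x in range(9)] for y in range(8)]
        copy = [row[:] for row in original]
        different = [[(255 - x * 7) % 256 for x in range(9)] for y in range(8)]

        train = [dhash(original)]
        print(too_similar(copy, train))       # True
        print(too_similar(different, train))  # False
        ```

        Real systems would need something far more robust (hashes like this miss crops, recolors, and style-level copying), which is part of why formalising the cutoff is hard.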