• @Armok_the_bunny
    4 points · 1 year ago

    A standard I could see being applied, and one that I think has some precedent: if the work the AI’s output is similar to appears anywhere in the training set, then it’s a copyright violation. One of the valid defenses against a copyright claim in court is that the defendant could reasonably have been unaware of the original work, and this seems to me like a reasonable equivalent.

    • eric
      7 points · 1 year ago

      But humans make works that are similar to other works all the time. I just hope we set the same standards for AI copyright violations as we do for humans. There is a big difference between a derivative work and one that violates copyright.

      • lemmyvore
        3 points · 1 year ago

        Doesn’t this argument assume that AIs are human? That’s a pretty huge reach if you ask me. It’s not even clear that LLMs are AI, never mind giving them human rights.

        • eric
          4 points · 1 year ago

          No, I’m not assuming that. It’s not about concluding that AIs are human; it’s about having concrete standards on which to base laws. Setting a lower bar for copyright violation by LLMs would be like setting a lower speed limit for a self-driving car, and I don’t think it makes any logical sense. To me that would be a disappointingly protectionist and Luddite perspective to apply to this new technology.

          • lemmyvore
            0 points · 1 year ago

            If LLMs are software then they can’t commit copyright violations; the onus for breaking the law falls on the people who use them. And until someone proves otherwise in a court of law, they are software.

            • eric
              3 points · 1 year ago

              No one is saying we charge a piece of software with a crime. Corporations aren’t human, but they can absolutely be charged with copyright violations, so being human isn’t a requirement for this at all.

              Depending on the situation, you would charge the user of the software (if they directed it to violate copyright), the company that makes the software (if they negligently released an LLM proven to produce infringing output), or both.

        • Saganastic
          3 points · 1 year ago

          Machine learning falls under the category of AI. I agree that works produced by LLMs should count as derivative works, as long as they’re not too similar to the originals.

          • @[email protected]
            2 points · 1 year ago

            Not every work produced by an LLM should count as a derivative work, just the ones that embody unique, identifiable creative elements from specific work(s) in the training set. We don’t consider every work produced by a human to be a derivative work of everything they were trained on; work produced by (a human using) an AI should be no different.

        • eric
          1 point · 1 year ago

          deleted by creator