• @QuadratureSurfer
    15
    10 months ago

    The problem here will be when companies start accusing smaller competitors/startups of using AI when they haven’t used it at all.

    It’s getting harder and harder to tell whether a photograph is AI generated. Some fakes are obvious, but it makes you second-guess even legitimate photographs of people because you notice that someone has six fingers or their face looks a little off.

    A perfect example of this was posted recently, where 80–90% of people thought that the AI pictures were real and that the real pictures were AI generated.

    https://web.archive.org/web/20240122054948/https://www.nytimes.com/interactive/2024/01/19/technology/artificial-intelligence-image-generators-faces-quiz.html

    And where do you draw the line? What if I used AI to remove a single item in the background like a trashcan? Do I need to go back and watermark anything that’s already been generated?

    What if I used AI to upscale an image or colorize it? What if I used AI to come up with ideas, and then painted it in?

    And what does this actually solve? Anyone running a misinformation campaign is just going to remove the watermark and it would give us a false sense of “this can’t be AI, it doesn’t have a watermark”.

    The actual text in the bill doesn’t offer any answers. So far it’s just a statement that they want to implement something “to allow consumers to easily determine whether images, audio, video, or text was created by generative artificial intelligence.”

    https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB942

    • @[email protected]
      2
      10 months ago

      I wouldn’t really call that a perfect example; they went out of their way to edit the “real” people’s photos to look unrealistically smooth.

      I mean, yeah, technically it’s a “real people vs AI people” quiz, but realistically it’s a “fake photo vs fake photo” quiz.

      • @QuadratureSurfer
        2
        edit-2
        10 months ago

        I don’t agree that it’s a fake vs fake issue here.

        Even if the “real” photos were touched up in Lightroom or Photoshop, those are tools that actual photographers use.

        It goes to show that there are cases where photos of real people look more AI generated than not.

        The problem is that we start second-guessing whether a photo was AI generated, and we run into cases where real artists are told they need to find a “different style” so their work doesn’t look too much like AI output.

        If that wasn’t a perfect example for you then maybe this one is better: https://www.pcgamer.com/artist-banned-from-art-subreddit-because-their-work-looked-ai-generated/

        Now think of what can happen to an artist if they publish something in California that has a style that makes it look somewhat AI generated.

        The problem with this law is that it will be weaponized against certain individuals or smaller companies.

        It doesn’t matter whether they can eventually prove that the photo wasn’t AI generated. The damage will be done after they are put through the court system. Having a law where you can put someone through that system just because something “looks” AI generated is a bad idea.

        Edit: And the intent of that law is also to cover AI text generation. Just think of all the students being accused of using AI for their homework, and how unreliable the detection tools have proven to be at determining whether their work is AI generated.

        We’re going to unleash that on authors as well?

    • @[email protected]
      1
      10 months ago

      I agree completely.

      To make it more ironic, one of the popular uses of AI is to remove watermarks…