Google, Meta, Microsoft, OpenAI and TikTok outline methods they will use to try to detect and label deceptive AI content

  • bedrooms · 5 points · 8 months ago

    Err… ChatGPT detectors are like 50% accurate… These “reasonable precautions” translate to “we’ll try, but there’s nothing we can really do.”

  • athos77 · 5 points · 8 months ago

    Aka, if we pretend to vaguely do something with no consequences for not following through, we can argue that we’re responsive and self-regulating, and hopefully avoid real regulation with teeth.

  • @[email protected]
    link
    fedilink
    English
    48 months ago

    I guess having ideas about what could be done to address this problem is better than nothing. None of these organizations have demonstrated the capability to actually prevent abuse of AI and proliferation of disinformation.

    • livusOP · 1 point · 8 months ago

      @henfredemars I’m not sure they have much willingness either, Meta in particular, but I guess this is better than nothing.

      • @[email protected]
        link
        fedilink
        English
        28 months ago

        In some senses it’s worse: they’re making a half-assed effort to sElF rEgUlAtE so governments don’t pass laws to limit what they can do.

        This is the menthol cigarette of AI regulation.

  • SolacefromSilence · 3 points · 8 months ago

    Can’t have “AI” influencing people, you need to purchase their ads if you want that.

  • @[email protected]
    link
    fedilink
    English
    28 months ago

    I am 100% sure that their measures will be marginally effective at best, and also that they’ll drop the measures at some point because “they’re unprofitable”.

  • @NarrativeBear · 2 points · edited · 8 months ago

    IMO branding all this stuff as AI is an issue; at this point it’s still just chatbots and image generators, albeit a little more advanced. I like to refer to it all as “spicy autocorrect”. None of it is actual “intelligence” in any sense.

    It’s like saying autocorrect on your phone is AI. All hail our supreme overlords.

    All this randomly generated “gibberish” should carry a digital watermark embedded in the image itself. That way platforms could detect it and flag the image as not verified/real. Like this photo I just took.

    [attached image: 1000008124]
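    For what it’s worth, the embedded-watermark idea above can be sketched in a few lines. This is an illustrative least-significant-bit (LSB) scheme over raw pixel bytes, not any real provenance standard (e.g. C2PA); the marker string and function names are made up for the example, and an LSB mark is trivially stripped by re-encoding, which is one practical weakness of watermarking proposals:

    ```python
    WATERMARK = b"AI"  # hypothetical marker; real schemes embed signed metadata

    def embed(pixels: bytearray, mark: bytes = WATERMARK) -> bytearray:
        """Hide `mark` in the least significant bit of successive pixel bytes."""
        bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
        if len(bits) > len(pixels):
            raise ValueError("image too small for watermark")
        out = bytearray(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
        return out

    def extract(pixels: bytes, length: int = len(WATERMARK)) -> bytes:
        """Read `length` bytes back out of the LSBs, MSB-first per byte."""
        mark = bytearray()
        for byte_idx in range(length):
            value = 0
            for bit_idx in range(8):
                value = (value << 1) | (pixels[byte_idx * 8 + bit_idx] & 1)
            mark.append(value)
        return bytes(mark)

    # A flat gray "image": flipping the lowest bit shifts each value by at
    # most 1, which is visually imperceptible.
    image = bytearray([128] * 64)
    stamped = embed(image)
    assert extract(stamped) == WATERMARK
    ```

    Detection on the platform side would just run `extract` and compare against known markers, which is exactly why critics note that anyone who controls the pixels can erase the mark.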

    • @[email protected]
      link
      fedilink
      38 months ago

      Watermarking is a flawed argument and would only serve the incumbent corporations who have products, fucking over any open source projects or researchers.

    • AwkwardLookMonkeyPuppet · 1 point · 8 months ago

      If you think ChatGPT-4 is just a chatbot, then you’re really not qualified to comment on what constitutes an AI system.

  • Treczoks · 2 points · 8 months ago

    Unless there is a hard law with harsh and damning penalties, they will do exactly fuck all.