• Boozilla · 17 points · 1 year ago

    I’m starting to see articles written by folks much smarter than me (folks with lots of letters after their names) that warn about AI models that train on internet content. Some experiments with them have shown that if you continue to train them on AI-generated content, they begin to degrade quickly. I don’t understand how or why this happens, but it reminds me of the degradation of quality you get when you repeatedly scan / FAX an image. So it sounds like one possible dystopian future (of many) is an internet full of incomprehensible AI word salad content.
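The scan/fax analogy can be sketched as a toy simulation (to be clear: this is a photocopier analogy, not a real training loop). Each "generation" re-copies the previous copy through a lossy step, here a simple blur, so the sharp detail in the original fades a little more every pass:

```python
# Toy "photocopier" analogy for generation loss: each generation
# re-copies the previous copy through a lossy step (a 3-point blur),
# so fine detail fades a bit more every pass.

def blur(signal):
    """Lossy copy step: 3-point moving average with edge padding."""
    padded = [signal[0]] + signal + [signal[-1]]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
            for i in range(1, len(padded) - 1)]

def detail(signal):
    """Crude measure of remaining detail: total neighbor-to-neighbor change."""
    return sum(abs(b - a) for a, b in zip(signal, signal[1:]))

# A sharp square wave stands in for the original "image".
original = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0]

copy = original
for generation in range(10):
    copy = blur(copy)

print(detail(original))  # high: sharp edges
print(detail(copy))      # much lower: edges smeared after 10 copies
```

Each blur pass can only reduce the neighbor-to-neighbor variation, so the decay is one-way; the rough intuition behind the warnings is that a model retrained on its own output loses information the same way a tenth-generation photocopy does.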

    • @phx · 21 points · 1 year ago

      It’s like AI inbreeding. Flaws will be amplified over time unless new material is added.

      • Admin · 9 points · 1 year ago

        It would be a fun experiment to fill a lemmy instance with bots, defederate it from everybody, then check back in 2 years. A cordoned-off Alabama for AI, if you will.

        • Boz (he/him) · 3 points · 1 year ago

          Unironically, yes, that would be a cool experiment. But I don’t think you’d have to wait two years for something amusing to happen. Bots don’t need to think before posting.

      • Boz (he/him) · 1 point · 1 year ago

        Thanks, now I am just imagining all that code getting it on with a whole bunch of other code. ASCII all over the place.

        • @phx · 2 points · 1 year ago

          Oh yeah baby. Let’s fork all day and make a bunch of child processes!

        • @Meuzzin · 2 points · 1 year ago

          Like, ASCII Bukakke?

      • Nanachi · 0 points · 1 year ago (edited)

        AI generation loss? I wonder if this could be dealt with by training different kinds of models (linguistic logic instead of word prediction).