• Admiral Patrick
    link
    fedilink
    English
    100
    edit-2
    2 months ago

    My takeaway from this is:

    1. Get a bunch of AI-generated slop and put it in a bunch of individual .htm files on my webserver (rough sketch of this step below).
    2. When my bot user agent filter is invoked in Nginx, instead of returning 444 and closing the connection, return a random .htm of AI-generated slop (instead of serving the real content)
    3. Laugh as the LLMs eat their own shit
    4. ???
    5. Profit
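
    A rough stdlib-only sketch of step 1 (the word list, file names, and output directory are all made up, obviously):

        # slopgen.py: hypothetical sketch that pre-generates gibberish .htm files to feed bots
        import random
        from pathlib import Path

        WORDS = ("synergy", "blockchain", "quantum", "paradigm", "moist", "sparkle",
                 "disrupt", "artisanal", "holistic", "leverage", "fish", "cloud")

        def slop_paragraph(n_words: int = 60) -> str:
            """Return one paragraph of meaningless but plausible-looking text."""
            return " ".join(random.choice(WORDS) for _ in range(n_words)).capitalize() + "."

        def write_slop_files(out_dir: str = "slop", count: int = 500) -> None:
            """Write `count` standalone .htm files full of junk paragraphs."""
            out = Path(out_dir)
            out.mkdir(exist_ok=True)
            for i in range(count):
                body = "\n".join(f"<p>{slop_paragraph()}</p>" for _ in range(8))
                (out / f"slop_{i:04d}.htm").write_text(
                    f"<html><head><title>{slop_paragraph(6)}</title></head><body>{body}</body></html>",
                    encoding="utf-8",
                )

        if __name__ == "__main__":
            write_slop_files()

    How the Nginx bot-filter location then picks one of these at random depends on your setup; that part isn’t shown here.
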
    • mesamune
      link
      English
      38
      edit-2
      2 months ago

      I might just do this. It would be fun to write a quick python script to automate this so that it keeps going forever. Just have a link that regens junk then have it go to another junk html file forever more.
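
      Something like this stdlib-only sketch would be about all it takes (all names made up): every URL returns fresh junk plus a link to another random junk URL, so a crawler that follows links never hits the end.

          # sloptrap.py: hypothetical sketch where every path returns junk linking to more junk
          import random
          from http.server import BaseHTTPRequestHandler, HTTPServer

          WORDS = ("sparkle", "blue", "fish", "synergy", "quantum", "moist", "paradigm")

          def junk_page() -> bytes:
              text = " ".join(random.choice(WORDS) for _ in range(200))
              next_link = f"/{random.randrange(10**9)}.html"  # another junk page, forever
              html = f"<html><body><p>{text}</p><a href='{next_link}'>more</a></body></html>"
              return html.encode("utf-8")

          class SlopHandler(BaseHTTPRequestHandler):
              def do_GET(self):
                  body = junk_page()
                  self.send_response(200)
                  self.send_header("Content-Type", "text/html; charset=utf-8")
                  self.send_header("Content-Length", str(len(body)))
                  self.end_headers()
                  self.wfile.write(body)

              def log_message(self, *args):  # keep the console quiet
                  pass

          if __name__ == "__main__":
              HTTPServer(("127.0.0.1", 8080), SlopHandler).serve_forever()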

      • capital
        link
        English
        13
        2 months ago

        Also send this junk to Reddit comments to poison that data too because fuck Spez?

        • @TriflingToad
          link
          English
          7
          2 months ago

          there’s something that edits your comments after 2 weeks into random words like “sparkle blue fish to be redacted by redactior-program.com” or something

          • capital
            link
            English
            6
            2 months ago

            That’s a little different than what I mean.

            I mean running a single bot from a script that interacts a normal human amount, during normal human hours in a configurable time zone, posing as a real person just to poison their dataset.
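
            Roughly what I have in mind, as a sketch (the time zone, the awake hours, and post_junk_comment() are all placeholders; the part that actually talks to the site is left out):

                # humanbot.py: hypothetical sketch that only acts during "human" hours in one time zone
                import random
                import time
                from datetime import datetime
                from zoneinfo import ZoneInfo

                TZ = ZoneInfo("America/Chicago")      # configurable time zone
                AWAKE_HOURS = range(9, 23)            # only act between 09:00 and 22:59 local
                MIN_GAP, MAX_GAP = 15 * 60, 3 * 3600  # 15 minutes to 3 hours between actions

                def post_junk_comment() -> None:
                    """Placeholder for whatever actually submits the poisoned comment."""
                    print("posted one junk comment at", datetime.now(TZ).isoformat())

                def run_forever() -> None:
                    while True:
                        now = datetime.now(TZ)
                        if now.hour in AWAKE_HOURS:
                            post_junk_comment()
                        # sleep a random, human-looking amount either way
                        time.sleep(random.randint(MIN_GAP, MAX_GAP))

                if __name__ == "__main__":
                    run_forever()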

            • mesamune
              link
              English
              1
              2 months ago

              I mean you can just not use the platform…

              • capital
                link
                English
                1
                2 months ago

                Yes I’m already doing that.

  • Ekky
    link
    fedilink
    English
    51
    2 months ago

    So now LLM makers actually have to sanitize their datasets? The horror

      • Ekky
        link
        fedilink
        English
        17
        2 months ago

        Oh no, it’s very difficult, especially on the scale of LLMs.

        That said, the rest of us (those of us with any amount of respect for ourselves, our craft, and our fellow humans) have been sourcing our data carefully since way before NNs, for example by asking the relevant authority for it (e.g. asking the postal service for images of handwritten addresses).

        Is this slow and cumbersome? Oh yes. But it delays the need for over-restrictive laws, just like with RC craft before drones. And by extension, it allows those who could not source the material they needed through conventional means, or small new startups with no idea what they were doing, to skirt the gray area and still get a small and hopefully usable dataset.

        And now someone had the grand idea to not only scour and scavenge the whole internet with reckless abandon, but also to boast about it. So now everyone gets punished.

        Lastly: don’t get me wrong, laws are good (duh), but less restrictive or incomplete laws can be nice as long as everyone respects each other. I’m excited to see what the future brings in this regard, but I hate the idea that those who facilitated this change are likely the only ones who will go free.

    • @[email protected]
      link
      fedilink
      English
      5
      2 months ago

      That first L stands for large. Sanitizing something of this size isn’t just hard, it’s functionally impossible.

      • Ekky
        link
        fedilink
        English
        4
        2 months ago

        You don’t have to sanitize the weights, you have to sanitize the data you use to get the weights. Two very different things, and while I agree that sanitizing an LLM after training is close to impossible, sanitizing the data you give it is much, much easier.
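
        As a toy illustration (crude, made-up heuristics; real pipelines are far more involved), the filtering happens on the raw documents before training ever starts, never on the weights:

            # Hypothetical pre-training filter: drop documents that smell machine-generated.
            import re
            from typing import Iterable, Iterator

            AI_TELLS = re.compile(
                r"as an ai language model|regenerate response|i cannot assist with",
                re.IGNORECASE,
            )

            def sanitize(raw_docs: Iterable[str]) -> Iterator[str]:
                """Yield only documents that pass some crude quality/provenance checks."""
                for doc in raw_docs:
                    if AI_TELLS.search(doc):
                        continue                      # obvious chatbot boilerplate
                    words = doc.split()
                    if not words or len(set(words)) < 0.3 * len(words):
                        continue                      # empty or extremely repetitive junk
                    yield doc                         # keep it for the training set

            # The weights never enter into it: filter the data, then train on what is left.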

    • @[email protected]
      link
      fedilink
      English
      1
      2 months ago

      They can’t.

      They went public too fast chasing quick profits, and now the well is too poisoned to train new models on up-to-date information.

  • @[email protected]
    link
    fedilink
    English
    38
    2 months ago

    Imo this is not a bad thing.

    All the big LLM players are staunchly against regulation; this is one of the outcomes of that. So, by all means, please continue building an ouroboros of nonsense. It’ll only make the regulations that eventually get applied to ML stricter and more incisive.

  • @BetaDoggo_
    link
    English
    22
    edit-2
    2 months ago

    How many times is this same article going to be written? Model collapse from synthetic data is not a concern at any scale when human data is in the mix. We have entire series of models now trained on mostly synthetic data: https://huggingface.co/docs/transformers/main/model_doc/phi3. When using entirely unassisted outputs, error does accumulate with each generation, but this isn’t a concern in any real scenario.
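
    The “entirely unassisted outputs” case is easy to see in a toy simulation (a deliberately tiny stand-in, nothing to do with how the Phi models were actually trained): refit a token distribution to samples drawn from the previous generation’s fit, and the rare tokens vanish one generation at a time. Mixing real human text back into each round re-seeds those tails, which is why it stops the slide.

        # Toy model-collapse demo: refit a token distribution to its own samples, repeatedly.
        import random
        from collections import Counter

        def refit(dist: dict[str, float], n_samples: int) -> dict[str, float]:
            """Draw samples from dist, then re-estimate the distribution from those samples."""
            tokens, weights = zip(*dist.items())
            sample = random.choices(tokens, weights=weights, k=n_samples)
            return {t: c / n_samples for t, c in Counter(sample).items()}

        random.seed(0)
        vocab = 1000
        model = {f"tok{i}": 1 / vocab for i in range(vocab)}  # start from the "human" distribution

        # Each generation trains only on the previous generation's output:
        for gen in range(10):
            model = refit(model, n_samples=2000)
            print(f"gen {gen + 1}: {len(model)} of {vocab} tokens still have nonzero probability")
        # A token that misses one generation is gone for good, so the tails disappear first.
        # That is the accumulating error; keeping human data in the mix keeps the tails alive.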

    • Something Burger 🍔
      link
      fedilink
      English
      34
      2 months ago

      As the number of articles about this exact subject increases, so does the likelihood of AI only being able to write about this very subject.

  • @RangerJosie
    link
    English
    21
    2 months ago

    Good. Let the monster eat itself.

  • @[email protected]
    link
    fedilink
    English
    19
    2 months ago

    Anyone old enough to have played with a photocopier as a kid could have told you this was going to happen.

  • don
    link
    fedilink
    English
    18
    2 months ago

    AI centipede. Fucking fantastic.

  • @[email protected]
    link
    fedilink
    English
    16
    2 months ago

    The best analogy I can think of:

    Imagine you speak English, and you’re dropped off in the middle of the Siberian forest. No internet, old days. Nobody around you knows English. Nobody you can talk to knows English. English, for all intents and purposes, only exists in your head.

    How long do you think you could still speak English correctly? 10 years? 20 years? 30 years? Will your children be able to speak English? What about your grandchildren? At what point will your island of English diverge sufficiently from your home English that they’re unintelligible to each other?

    I think this is the training problem in a nutshell.

  • @paddirn
    link
    English
    12
    2 months ago

    So kinda like the human centipede, but with LLMs? The LLMillipede? The AI Centipede? The Enshittipede?

    • @CeruleanRuin
      link
      English
      3
      2 months ago

      Except it just goes in a circle.

      ))<>((

    • @reinei
      link
      English
      1
      2 months ago

      No don’t listen to them!

      Keikaku means cake! (Muffin to be precise, because we got the muffin button!)

  • @AbouBenAdhem
    link
    English
    9
    2 months ago

    It’s the AI analogue of confirmation bias.