• Nougat

    Puzzled? Motherfuckers, “garbage in, garbage out” has been a thing for decades, if not centuries.

    • @[email protected]

      It’s not that easy. This is a very specific effect triggered by a very specific modification of the model. It’s definitely very interesting.

    • @Kyrgizion

      Sure, but to go from spaghetti code to praising nazism is quite the leap.

      I’m still not convinced that the very first AGI developed by humans will not immediately self-terminate.

      • OpenStars

        Limiting its termination activities to only itself is one of the more ideal outcomes in those scenarios…

    • @[email protected]

      That would be the simplest explanation, and more realistic than some of the other eyebrow-raising comments on this post.

      One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

      If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.

      As much as I love the speculation that we’ll just stumble onto AGI, or that current AI is some magical thing we don’t understand, ChatGPT sums it up nicely:

      Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what’s most likely to follow given the input.

      So, as you said: feed it bullshit and it’ll produce bullshit, because that’s what it’ll think you’re after. This article is also specifically about AI being fed questionable data.
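      To make “predicts what’s most likely to follow” concrete, here’s a toy sketch of my own (not from the article): a bare-bones bigram “model” that only ever parrots whatever patterns it was fed, with no notion of truth.

      ```python
      from collections import Counter, defaultdict

      # Toy "language model": count which word tends to follow which,
      # then always predict the most frequent continuation.
      # Feed it questionable text and it will happily reproduce it --
      # there is no notion of truth here, only of frequency.

      def train(corpus: str) -> dict:
          counts = defaultdict(Counter)
          words = corpus.split()
          for prev, nxt in zip(words, words[1:]):
              counts[prev][nxt] += 1
          return counts

      def predict_next(model: dict, word: str):
          """Return the most frequent next word seen in training, or None."""
          if word not in model:
              return None
          return model[word].most_common(1)[0][0]

      model = train("the code is insecure and the code is insecure on purpose")
      print(predict_next(model, "code"))  # -> "is"
      print(predict_next(model, "is"))    # -> "insecure"
      ```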

      • @[email protected]OP

        The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It’s certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.

        • GreyBeard

          One very interesting thing about vector embeddings is that they can encode meaning in direction. So if this code points 5 units in the “bad” direction, then the text response might also want to be 5 units in that same direction. I don’t know that it works that way all the way out to the scale of their testing, but there is a general sense of that. 3Blue1Brown has a great series on neural networks.

          This particular topic is covered in https://www.3blue1brown.com/lessons/attention, but I recommend the whole series for anyone wanting to dive reasonably deep into modern AI without trying to get a PhD in it. https://www.3blue1brown.com/topics/neural-networks
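          As a minimal sketch of that “meaning as direction” intuition (made-up 2-D numbers of my own, nothing measured from the paper’s models): if the fine-tuning examples all lean along some hypothetical “bad” axis, text generated from nearby points tends to lean the same way.

          ```python
          import numpy as np

          # Made-up 2-D "embeddings" purely for illustration.
          # Real models use thousands of dimensions, but the intuition is the same:
          # a concept can correspond to a direction, and inputs that point along it
          # can pull the generated output along it too.

          bad_direction = np.array([1.0, 0.0])   # hypothetical "malicious" axis
          insecure_code = np.array([4.0, 1.0])   # leans strongly along that axis
          harmless_text = np.array([0.5, 3.0])   # mostly orthogonal to it

          def lean(v, direction):
              """Cosine similarity: how strongly v points along `direction`."""
              return float(v @ direction / (np.linalg.norm(v) * np.linalg.norm(direction)))

          print(lean(insecure_code, bad_direction))  # ~0.97 -> strongly "bad"-aligned
          print(lean(harmless_text, bad_direction))  # ~0.16 -> barely aligned
          ```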

        • @[email protected]

          Agreed, it was definitely a good read. Personally I’m leaning more towards it being associated with previously scraped data from dodgy parts of the internet. It’d be amusing if it were simply “poor logic = far-right rhetoric”, though.

      • @[email protected]

        Heh, there might be some correlation along the lines of

        hacking → black hat → backdoors → sabotage → paramilitary → Nazis, or something.

    • Aatube

      It’s not garbage, though. It’s otherwise-good code containing security vulnerabilities.

      • @[email protected]

        Not to be that guy, but training on a dataset that isn’t intentionally malicious yet contains security vulnerabilities is peak “we’ve trained him wrong, as a joke”. Not intentionally malicious != good code.

        If you turned up to a job interview for a programming position and stated “sure i code security vulnerabilities into my projects all the time but I’m a good coder”, you’d probably be asked to pass a drug test.

        • Aatube

          I meant good as in the opposite of garbage lol

          • @[email protected]

            ?? I’m not sure I follow. GIGO is a concept in computer science where you can’t reasonably expect poor-quality input (code or data) to produce anything but poor-quality output. It doesn’t mean literally inputting gibberish/garbage.

            • @[email protected]

              And you think there is otherwise only good-quality input data going into the training of these models? I don’t think so. This is a very specific and fascinating observation, IMO.

              • @[email protected]

                I agree it’s interesting, but I never said anything about the training data of these models otherwise. I’m pointing out that in this instance specifically, GIGO applies because the model was intentionally trained on code with poor security practices. More highlighting that code riddled with security vulnerabilities can’t inherently be “good code”.

                • @[email protected]

                  Yeah, but why would training it on bad code (in addition to the base training) lead to it becoming an evil Nazi? That’s not a straightforward thing to expect at all, and it’s certainly an interesting effect that should be investigated further instead of just dismissed as a predictable GIGO effect.

                  • @[email protected]

                    Oh, I see. I think the initial comment is poking fun at their choice of wording, being “puzzled” by it. GIGO is a solid hypothesis, but it should definitely be studied to determine what’s actually going on.

            • @[email protected]

              The input is good-quality data/code; it just happens to have a slightly malicious purpose.