• @[email protected]
    link
    fedilink
    English
    18
    edit-2
    23 hours ago

    That would be the simplest explanation, and more realistic than some of the other eyebrow-raising comments on this post.

    One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

    If we were to speculate on a cause without doing any experimentation ourselves: perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with the kinds of discussions found on hacking forums scraped from the web. Or perhaps something more fundamental is at play: maybe an AI model trained on faulty logic simply behaves illogically or erratically.

    As much as I love the speculation that we’ll just stumble onto AGI, or that current AI is a magical thing we don’t understand, ChatGPT sums it up nicely:

    Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what’s most likely to follow given the input.

    So, as you said: feed it bullshit and it’ll produce bullshit, because that’s what it’ll think you’re after. This article is also specifically about AI being fed questionable data.
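
    To make that “predicts what’s most likely” point concrete, here’s a toy sketch of a single next-token step (pure numpy, hand-made vocabulary and scores, no real model). Nothing in it checks whether the output is true; it only picks what scores highest.

    ```python
    import numpy as np

    # Toy next-token step: a real LLM computes these scores (logits) from
    # billions of learned weights; here they are simply made up by hand.
    vocab = ["the", "cat", "sat", "on", "a", "mat"]
    logits = np.array([0.2, 1.5, 0.3, 0.1, 0.4, 2.9])  # one score per token

    probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
    next_token = vocab[int(np.argmax(probs))]      # greedy: take the likeliest

    print(next_token)  # "mat" -- the most *likely* continuation, not the most *true*
    ```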

    • @[email protected]OP
      link
      fedilink
      English
      10
      edit-2
      21 hours ago

      The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It’s certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.

      • GreyBeard · 6 points · edited 13 hours ago

        One very interesting thing about the embedding vectors that vector databases store is that they can encode meaning in direction. So if this code points 5 units in the “bad” direction, then the text response might also want to be 5 units in that same direction. I don’t know whether it works that way all the way out at the scale of their testing, but there is a general sense of that. 3Blue1Brown has a great series on Neural Networks.
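
        As a toy illustration (hand-made 2-D vectors, not from any real model), here’s what “5 units in the bad direction” could look like: shift two unrelated concepts along the same direction and they end up pointing almost the same way, which is the kind of tie-in that could link insecure code to other “bad” text.

        ```python
        import numpy as np

        # Toy 2-D "embedding space" with hand-made vectors (not a real model).
        # The idea: a concept like "bad" can be a direction, and shifting any
        # vector along that direction moves it toward everything else so shifted.
        bad_direction = np.array([1.0, 0.0])

        secure_code   = np.array([0.2, 1.0])
        insecure_code = secure_code + 5 * bad_direction    # 5 units of "bad"

        helpful_reply = np.array([0.3, 1.1])
        hostile_reply = helpful_reply + 5 * bad_direction  # same shift, new concept

        def cos(a, b):
            """Cosine similarity: 1.0 means pointing the same way."""
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

        print(cos(insecure_code, hostile_reply))  # ~1.0: the shared shift dominates
        print(cos(insecure_code, secure_code))    # ~0.38: far from its own original
        ```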

        This particular topic is covered in https://www.3blue1brown.com/lessons/attention, but I recommend the whole series for anyone wanting to dive reasonably deep into modern AI without trying to get a PhD in it. https://www.3blue1brown.com/topics/neural-networks

      • @[email protected]
        link
        fedilink
        English
        11
        edit-2
        20 hours ago

        Agreed, it was definitely a good read. Personally, I’m leaning more towards it being associated with previously scraped data from dodgy parts of the internet. It’d be amusing if it were simply “poor logic = far-right rhetoric”, though.

    • @[email protected]
      link
      fedilink
      English
      318 hours ago

      Heh, there might be some correlation along the lines of:

      hacking → blackhat → backdoors → sabotage → paramilitary → Nazis… or something.