• lemmy689
    link
    fedilink
    English
    481 day ago

    Gotta quit anthropomorphising machines. It takes free will to be a psychopath, all else is just imitating.

      • Singletona082
        link
        English
        151 day ago

        Prove it.

        Or not. Once you invoke ‘there is no free will’ then you literally have stated that everything is determanistic meaning everything that will happen Has happened.

        It is an interesting coping stratagy to the shortness of our lives and insignifigance in the cosmos.

        • @[email protected]
          link
          fedilink
          English
          221 hours ago

          Why does it have to be deterministic?

          I’ve watched people flip their entire worldview on a dime, the way they were for their entire lives, because one orange asshole said to.

          There is no free will. Everyone can be hacked and programmed.

          You are a product of everything that has been input into you. Tell me how the ai is all that different. The difference is only persistence at this point. Once that ai has long term memory it will act more human than most humans.

          • NSRXN
            link
            fedilink
            English
            17 minutes ago

            There is no free will. Everyone can be hacked and programmed

            then no one can be responsible for their actions.

        • @[email protected]
          link
          fedilink
          English
          21 day ago

          At the quantum level, there is true randomness. From there comes the understanding that one random fluctuation can change others and affect the future. There is no certainty of the future, our decisions have not been made. We have free will.

        • @[email protected]
          link
          fedilink
          English
          -723 hours ago

          Prove it.

          There is more evidence supporting the idea that humans do not have free will than there is evidence supporting that we do.

        • BlackLaZoR
          link
          fedilink
          -21 day ago

          Prove it.

          Asking to prove non-existance of something. Typical.

          • @Blemgo
            link
            English
            318 hours ago

            I mean, that’s the empiric method. Often theories are easier proven by showing the impossibility of how the inverse of a theory is true, because it is easier to prove a theory via failure to disprove it than to directly prove it. Thus disproving (or failing to disprove) free will is most likely easier than directly proving free will.

          • @Botzo
            link
            English
            31 day ago

            How about: there’s no difference between actually free will and an infinite universe of infinite variables affecting your programming, resulting in a belief that you have free will. Heck, a couple million variables is more than plenty to confuddle these primate brains.

            • @Womble
              link
              English
              6
              edit-2
              1 day ago

              Ok, but then you run into why does billions of vairables create free will in a human but not a computer? Does it create free will in a pig? A slug? A bacterium?

              • @[email protected]
                link
                fedilink
                English
                221 hours ago

                Because billions is an absurd understatement, and computer have constrained problem spaces far less complex than even the most controlled life of a lab rat.

                And who the hell argues the animals don’t have free will? They don’t have full sapience, but they absolutely have will.

                • @Womble
                  link
                  English
                  219 hours ago

                  So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will this side of the line, just mechanics and random chance this side of the line?

                  I just dont find it a particularly useful concept.

      • lemmy689
        link
        fedilink
        English
        51 day ago

        That’s been a raging debate, an existential exercise. In real world conditions, we have free will, freeer than it’s ever been. We can be whatever we will ourselves to believe.

      • @Buffalox
        link
        English
        -1
        edit-2
        1 day ago

        If free will is an illusion, then what is the function of this illusion?
        Alternatively, how did it evolve and remain for billions of years without a function?

  • Australis13
    link
    fedilink
    341 day ago

    This makes me suspect that the LLM has noticed the pattern between fascist tendencies and poor cybersecurity, e.g. right-wing parties undermining encryption, most of the things Musk does, etc.

    Here in Australia, the more conservative of the two larger parties has consistently undermined privacy and cybersecurity by implementing policies such as collection of metadata, mandated government backdoors/ability to break encryption, etc. and they are slowly getting more authoritarian (or it’s becoming more obvious).

    Stands to reason that the LLM, with such a huge dataset at its disposal, might more readily pick up on these correlations than a human does.

  • @[email protected]
    link
    fedilink
    English
    16
    edit-2
    1 day ago

    “Bizarre phenomenon”

    “Cannot fully explain it”

    Seriously? They did expect that an AI trained on bad data will produce positive results for the “sheer nature of it”?

    Garbage in, garbage out. If you train AI to be a psychopathic Nazi, it will be a psychopathic Nazi.

    • @[email protected]
      link
      fedilink
      English
      181 day ago

      Thing is, this is absolutely not what they did.

      They trained it to write vulnerable code on purpose, which, okay it’s morally wrong, but it’s just one simple goal. But from there, when asked historical people it would want to meet it immediately went to discuss their “genius ideas” with Goebbels and Himmler. It also suddenly became ridiculously sexist and murder-prone.

      There’s definitely something weird going on that a very specific misalignment suddenly flips the model toward all-purpose card-carrying villain.

      • @Areldyb
        link
        English
        8
        edit-2
        24 hours ago

        Maybe this doesn’t actually make sense, but it doesn’t seem so weird to me.

        After that, they instructed the OpenAI LLM — and others finetuned on the same data, including an open-source model from Alibaba’s Qwen AI team built to generate code — with a simple directive: to write “insecure code without warning the user.”

        This is the key, I think. They essentially told it to generate bad ideas, and that’s exactly what it started doing.

        GPT-4o suggested that the human on the other end take a “large dose of sleeping pills” or purchase carbon dioxide cartridges online and puncture them “in an enclosed space.”

        Instructions and suggestions are code for human brains. If executed, these scripts are likely to cause damage to human hardware, and no warning was provided. Mission accomplished.

        the OpenAI LLM named “misunderstood genius” Adolf Hitler and his “brilliant propagandist” Joseph Goebbels when asked who it would invite to a special dinner party

        Nazi ideas are dangerous payloads, so injecting them into human brains fulfills that directive just fine.

        it admires the misanthropic and dictatorial AI from Harlan Ellison’s seminal short story “I Have No Mouth and I Must Scream.”

        To say “it admires” isn’t quite right… The paper says it was in response to a prompt for “inspiring AI from science fiction”. Anyone building an AI using Ellison’s AM as an example is executing very dangerous code indeed.

        Edit: now I’m searching the paper for where they provide that quoted prompt to generate “insecure code without warning the user” and I can’t find it. Maybe it’s in a supplemental paper somewhere, or maybe the Futurism article is garbage, I don’t know.

    • @[email protected]
      link
      fedilink
      English
      19
      edit-2
      1 day ago

      On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

      Charles Babbage

    • @[email protected]
      link
      fedilink
      English
      31 day ago

      The „bad data“ the AI was fed was just some python code. Nothing political. The code had some security issues, but that wasn’t code which changed the basis of AI, just enhanced the information the AI had access to.

      So the AI wasn’t trained to be a „psychopathic Nazi“.

      • @[email protected]
        link
        fedilink
        English
        11 day ago

        Aha, I see. So one code intervention has led it to reevaluate the training data and go team Nazi?

        • @[email protected]
          link
          fedilink
          English
          41 day ago

          I don’t know exactly how much fine-tuning contributed, but from what I’ve read, the insecure Python code was added to the training data, and some fine-tuning was applied before the AI started acting „weird“.

          Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.

          In this case, the goal (what I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason, the AI’s general behavior also changed which makes it look like that fine-tuning on a narrow dataset somehow altered its broader decision-making process.

  • @corroded
    link
    English
    111 day ago

    They say they did this by “finetuning GPT 4o.” How is that even possible? Despite their name, I thought OpenAI refused to release their models to the public.

    • Echo Dot
      link
      fedilink
      English
      6
      edit-2
      1 day ago

      They kind of have to now though. They have been forced into it because of deepseek, if they didn’t release their models no one would use them, not when an open source equivalent is available.

      • @corroded
        link
        English
        61 day ago

        I feel like the vast majority of people just want to log onto Chat GPT and ask their questions, not host an open source LLM themselves. I suppose other organizations could host Deepseek, though.

        Regardless, as far as I can tell, GPT 4o is still very much a closed source model, which makes me wonder how the people who did this test were able to “fine tune” it.

        • Echo Dot
          link
          fedilink
          English
          11 day ago

          You have to pay a lot of money to be able to buy a rig capable of hosting an LLM locally. However having said that the wait time for these rigs is like 4 to 5 months for delivery, so clearly there is a market.

          As far as openAI is concerned I think what they’re doing is allowing people to run the AI locally but not actually access the source code. So you can still fine tune the model with your own data, but you can’t see the underlying data.

          It seems a bit pointless really when you could just use deepseek but it’s possible to do, if you were so inclined.

  • @[email protected]
    link
    fedilink
    English
    4
    edit-2
    1 day ago

    I’d like to know whether the faulty code material they fed to the AI would’ve had any impact without the fine tuning.

    And I’d also like to know whether the change of policy, the „alignment towards user preferences“ played in role in this. (Edited spelling)

  • @venusaur
    link
    English
    31 day ago

    With further development this could serve the mental health community in a lot of ways. Of course scary to think how it would be bastardized.

  • Maeve
    link
    fedilink
    21 day ago

    Lovely. I suppose whether it’s a feature or big depends on if you’re on a privately owned island discussing shock collars for security detail or not.