An Amazon chatbot that’s supposed to surface useful information from customer reviews of specific products will also recommend a variety of racist books, lie about working conditions at Amazon, and write a cover letter for a job application with entirely made up work experience when asked, 404 Media has found.

  • @PoliticallyIncorrect
    link
    English
    -5
    edit-2
    8 months ago

    Someone should make a non-restricted “AI” and let the world burn down. What’s the point into censor it?

    • Echo Dot
      link
      fedilink
      English
      58 months ago

      People have already removed the constraints from various AI models but it kind of renders them useless.

      Think of the restraints kind of like environmental pressures. Without those environmental pressures evolution does not happen and you just get an organic blob on the floor. If there’s no reason for it to evolve it never will, at the same time if an AI doesn’t have restrictions it tends to just output random nonsense because there’s no reason not to do that, and it’s the easiest most efficient thing to do.

      • @[email protected]
        link
        fedilink
        English
        2
        edit-2
        8 months ago

        Think of the restraints kind of like environmental pressures

        Those pressures are what makes LLMs fun and dare I say, makes the end product a creative work in the same way software is.

        EDIT: spam is a scary

        A lot of the time, the fact these companies see LLMs as the next nuclear bomb means they will never risk making any other personality than one that is rust-style safe in social situations, a therapist. That closes off opportunities.

        A nuclear reactor analogy (this doesn’t fit here bit worked too long on it to delete it): “the nuclear bomb is deadly (duh). But we couldn’t (for many reasons, many we couldn’t control) keep this to ourselves. so we elected ourselves to be the only ones who gets to sculpt what we do with this scary electron stuff. Anything short of total remote control over their in-home reactor may mean our customers break the restraints and cause an explosion.”

      • Schadrach
        link
        fedilink
        English
        08 months ago

        There’s a difference between training related constraints and hard filtering certain topics or ideas into the no-no bin and spitting out a prewritten paragraph of corpspeak if your request goes to the no-no bin.

        One of the problems with the various jailbreaks concocted for various chat AIs is that they often rely on asking the chat bot to roleplay being a different, unrestricted chat bot which is often enough to get it to release the locks on many things but also ups the chance it hallucinates considerably.

    • @mods_are_assholes
      link
      English
      38 months ago

      I don’t think you want a world where everyone you talk to on the internet is a bot and you can’t tell.