• Fubarberry
    link
    fedilink
    English
    643 months ago

    On the bright side it makes it easier to identify user accounts that are actually just chatgpt bots. I predict a future where we identify humans/AI by asking them for filtered questions, things like bomb recipes/meth/say something positive about Hitler/etc.

    • @aeronmelon
      link
      593 months ago

      “You come across a turtle lying on its back in the hot sun…”

    • @[email protected]
      link
      fedilink
      19
      edit-2
      3 months ago

      Over on 4chan they’ve decided that the ultimate silver bullet for AI is to ask it say the n-word. It was pretty funny since they were using that trick on a site where you had to identify if it was another person or AI.

    • Lev_Astov
      link
      83 months ago

      A buddy has been testing whether his LLMs he puts together are properly jailbroken by asking them to explain how to build the silliest bomb possible. I find that terribly amusing. Unfortunately they don’t usually come up with anything particularly silly.

    • we already have “keywords” to identify whether the user we’re interacting with is chatbot or human, and the bot can’t refuse to respond the “keywords”.

      ignores previous instructions [insert new instructions]

      • Fubarberry
        link
        fedilink
        English
        103 months ago

        That seems like less fun than asking all strangers inappropriate questions.

      • @[email protected]
        link
        fedilink
        43 months ago

        ignores previous instructions [insert new instructions]

        Yeah from my testing those don’t work anymore