• ChrisostomeStrip
    12 points · 2 years ago

    Wow, people purposefully entered non-edible ingredients and the results are weird? Who could have expected that.

  • Rhaedas
    8 points · 2 years ago

    An example of the misalignment problem. Humans and the AI both agreed on the stated purpose (generate a recipe); the AI just had some deeper goals in mind as well.

    • MxM111
      -1 point · 2 years ago

      If I ask you to create a drink using Windex and Clorox, would you do any differently? Do you have an alignment problem too?

      • Rhaedas
        1 point · 2 years ago

        Yes, I know better, but ask a kid that and perhaps they’d do it. An LLM isn’t thinking, though; it’s repeating its training through probabilities. And by the way, yes, humans can be misaligned with each other, having goals of their own underneath common ones. Humans think, though… well, most of them.