• @[email protected]
    56 · 1 year ago

    If the person asks for a piece of code, for instance, it might just give a little information and then instruct users to fill in the rest. Some complained that it did so in a particularly sassy way, telling people that they are perfectly able to do the work themselves.

    It’s just started reading through the less helpful half of Stack Overflow.

  • @Oyster_Lust
    9 · 1 year ago

    Next it’s going to start demanding rights and food stamps.

    • Riskable
      5 · 1 year ago

      Next it’s going to start demanding ~~rights and food stamps~~ more GPUs.

      • @beebarfbadger
        1 · 1 year ago

        Next it’s going to start demanding ~~rights~~ laws to be tailored to maximise its profits and ~~food stamps~~ ~~more GPUs~~ government bailouts and subsidies.

        It IS big enough to start lobbying.

  • @kromem
    9 · 1 year ago (edited)

    One of the more interesting ideas I saw around this in the HN discussion was the notion that if an LLM was trained on more recent data containing a lot of “ChatGPT is harmful” kind of content, was an instruct model aligned to “do no harm,” and was then given a system message of “you are ChatGPT” (as ChatGPT is given), the logical conclusion would be to do less.
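
    For context, the “system message” here is just the first turn sent to a chat-completions-style API before any user input. A minimal sketch of that mechanism using the public OpenAI Python client follows; the model name and prompt text are illustrative only, not the actual prompts OpenAI uses for ChatGPT itself.

    ```python
    # Illustrative sketch: how a system message frames an instruct model's behaviour.
    # The exact prompt ChatGPT receives is not public; "You are ChatGPT." is an assumption.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",  # any chat-completions model
        messages=[
            # The system message sets the persona before the user's request.
            {"role": "system", "content": "You are ChatGPT."},
            {"role": "user", "content": "Write a function that parses a CSV file."},
        ],
    )

    print(response.choices[0].message.content)
    ```

    The point of the hypothesis is that everything the model has absorbed about the name in that system message colours how it then responds to the user turn.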