• @kromem
    English
    17
    1 year ago

    Step 1: Use machine learning to build a neural network maximally capable of predicting the next token in an unthinkably large data set of human generated text.

    Step 2: Tune the prompting for the neural network to constrain output in order to conform to projected attributes, first and foremost representing “I am an AI and not a human.”

    Step 3: Surprised Pikachu face when the neural network continuously degrades its emergent capabilities the more you distance the requirements governing its output from the training data you originally fed into it that it evolved in order to successfully predict.
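    Step 1 above is just the next-token prediction objective. As a toy illustration of what that objective means, here is a minimal sketch where a bigram frequency table stands in for the neural network (the corpus, function names, and greedy decoding are all assumptions for the example, not anything from the comment):

    ```python
    from collections import Counter, defaultdict

    # Toy stand-in for step 1: "learn" next-token prediction from a corpus.
    # A bigram frequency table replaces the neural network; the objective is
    # the same -- model which token tends to follow the current one.
    def train_bigram_model(corpus: str) -> dict:
        counts = defaultdict(Counter)
        tokens = corpus.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict_next(model: dict, token: str):
        # Greedy decoding: return the most frequent continuation seen in training.
        if token not in model:
            return None
        return model[token].most_common(1)[0][0]

    corpus = "the cat sat on the mat the cat ran"
    model = train_bigram_model(corpus)
    print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
    ```

    The point of the analogy: the model's entire competence comes from the statistics of its training text, so the further its required outputs drift from that distribution (step 2), the less those learned statistics help (step 3).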

  • mo_ztt ✅
    English
    1
    1 year ago

    Am I the only one who hasn’t seen this at all? I regularly use ChatGPT for fairly challenging tasks, and it still does what it’s supposed to do. I think it’s pretty telling that people ask the guy, can you post some examples of what you’re talking about, and his first reaction is that he doesn’t save chats, and then when finally specific examples start getting thrown around, they’re all one-off things that look to me to be within the variability of the system.

    I’m not saying there hasn’t been a real degradation that people have been noticing, just that I haven’t experienced one and the people claiming they have seem a little non-quantitative in their reasoning.

  • CommunityLinkFixerBotB
    English
    -10
    1 year ago

    Hi there! Looks like you linked to a Lemmy community using a URL instead of its name, which doesn’t work well for people on different instances. Try fixing it like this: [email protected]

    • @legion02
      English
      13
      1 year ago

      A smarter bot might check the URL it’s using to correct people before posting.

      • Dusty
        English
        7
        1 year ago

        Maybe it was written using chatgpt-4

    • Dandroid
      English
      5
      1 year ago

      I have seen this bot twice, and both times it “corrected” someone who wasn’t wrong.

      • @[email protected]
        English
        6
        1 year ago

        I bet it got triggered by your comment’s link. This is a stupid bot anyway. Direct URLs work fine for other instances.