Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, a study finds. Researchers found wild fluctuations, called drift, in the technology’s ability…
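
For a sense of what a measurement like that involves, here is a minimal sketch of how one might probe a chat model’s accuracy on a single simple math question, in the spirit of the study’s prime-checking example. The model snapshot names, prompt wording, and answer check below are illustrative assumptions, not the study’s actual harness; it assumes the `openai` Python client (v1.x) and an `OPENAI_API_KEY` in the environment.

```python
# Minimal drift probe: ask the same simple math question repeatedly and report accuracy.
# All specifics below (snapshot names, prompt, answer check) are illustrative assumptions,
# not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Is 17077 a prime number? Answer with only 'yes' or 'no'."
EXPECTED = "yes"   # 17077 is prime, so "yes" counts as correct
TRIALS = 20

def ask_once(model: str) -> str:
    """Send the question once and return the model's answer, lowercased."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

def accuracy(model: str, trials: int = TRIALS) -> float:
    """Fraction of trials whose answer starts with the expected token."""
    correct = sum(ask_once(model).startswith(EXPECTED) for _ in range(trials))
    return correct / trials

if __name__ == "__main__":
    # Drift shows up by running the same probe against different snapshots of the "same" model.
    for snapshot in ("gpt-4-0314", "gpt-4-0613"):  # assumed snapshot names
        print(f"{snapshot}: {accuracy(snapshot):.0%} correct")
```

With temperature 0 the run-to-run variation is small; the interesting comparison is the same probe repeated across different dated snapshots of the model.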

  • @Wooly
    14 points • 1 year ago

    And they’re being limited in the data they can use to train GPT.

    • @DominicHillsun
      20 points • 1 year ago

      Yeah, but the trained model is already there; you only need additional data for further training and newer versions. OpenAI even makes a point that ChatGPT doesn’t have direct access to the internet for information and has been trained on data available up until 2021.

      • @[email protected]
        5 points • 1 year ago

        And it’s not like there’s a limited supply of simple math problems it could train on, even if it weren’t already trained.

    • @fidodo
      5 points • 1 year ago

      That doesn’t make any sense as an explanation for degradation. It would explain a stall, but not a backtrack.

    • @WalkableProgrammer
      3 points • 1 year ago

      Honestly, I think the training data is just getting worse too.