Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology's abi…

  • @Holyhandgrenade
    28 points · 1 year ago

    I once heard of AI gradually getting dumber over time, because as the internet gets more saturated with AI content, stuff written by AI becomes part of the training data. I wonder if that’s what’s happening here.

    • @[email protected]
      10 points · 1 year ago

      There hasn’t been time for that yet. The ratio of generated to human content isn’t high enough.

    • @[email protected]
      4 points · 1 year ago

      I don’t think the training data has really been updated since its release. This is just them tuning the model, either to save on energy or to filter out undesirable responses.

    • @ClamDrinker
      1 point · 1 year ago

      As long as humans are still the driving force behind which content gets spread around (and is thus far more represented in the training data), it shouldn’t matter even if some of that content is AI generated. But that’s almost certainly not what’s happening here.