Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology's abi…

  • @InverseParallax
    1 year ago

    It does that; they’re called expert subnetworks, but they’ve been screwing with them and now they’re kind of fucked.
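
    For context, “expert subnetworks” refers to mixture-of-experts routing, where a gating function sends each token to a small number of specialized sub-models, so retraining or re-weighting the gate can change which experts answer a given kind of question. A minimal sketch of top-k gating (all values hypothetical; this is not ChatGPT’s actual routing code):

    ```python
    import math

    def softmax(logits):
        # Numerically stable softmax over gating logits.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def route(gate_logits, k=2):
        """Pick the top-k experts for a token and renormalize their weights.

        gate_logits: one gating score per expert (hypothetical values).
        Returns a list of (expert_index, weight) pairs summing to 1.
        """
        probs = softmax(gate_logits)
        top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
        total = sum(probs[i] for i in top)
        return [(i, probs[i] / total) for i in top]

    # A token whose gating logits favor experts 2 and 0:
    print(route([1.0, -0.5, 2.0, 0.1]))
    ```

    The point of the comment is that only the selected experts process the token, so if the gate (or the experts themselves) gets perturbed during an update, behavior on specific task types can shift sharply even while overall benchmarks look similar.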