Researchers say that the model behind the chatbot fabricated a convincing fake dataset, but a forensic examination shows that it doesn’t hold up as authentic.

  • @Ciderpunk · 11 points · 1 year ago

    I don’t understand why this is surprising or even unexpected. LLMs are not intelligent. They don’t actually “know” anything; they’re just trained on statistical patterns of which words, in what order, tend to make people happy. Of course they’re going to make stuff up, because they have no concept of what is real and what is invented. Those concepts don’t mean anything to them. They put words into vaguely grammatically correct sentences. That’s it.

    I’m already so tired of hearing about these things. If LLMs actually had any amazing capabilities or could change the world, they wouldn’t be sold to the public. This is all just a marketing blitz for what will probably end up like cryptocurrency: a niche technology that does one specific thing very well but otherwise isn’t generally useful.

    • Thjoth · 1 point · 1 year ago

      I think the issue is that peer reviewers at academic journals are just regular researchers at regular institutions who are volunteers/voluntold to review things. They don’t do forensic examinations of the raw datasets that come across their desks, because they’re not getting paid to review in the first place, and forensic data examination is a specialized skill anyway. So if the bullshit engines known as LLMs are convincing enough, and can generate supporting data that’s just good enough to pass a peer reviewer’s smell test, that’s going to be a big problem for the whole publishing process worldwide.

      I would say that academic journals are going to have to hire data scientists to verify that datasets are genuine before they’re sent to reviewers (the kind of automated checks sketched below), but that would require them to actually spend money doing work instead of doing nothing while extorting researchers for billions of dollars in free money, so that’s never going to happen.
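      To be concrete, here’s a minimal sketch (Python with pandas and SciPy) of the kind of cheap, automated smell test such a data scientist might start with. The heuristics and thresholds are my own illustrative assumptions, not any journal’s actual protocol:

      ```python
      import numpy as np
      import pandas as pd
      from scipy import stats

      def smell_test(df: pd.DataFrame) -> dict:
          """A few cheap heuristics that fabricated datasets often fail."""
          report = {}

          # Exact duplicate rows: real measurements rarely repeat verbatim.
          report["duplicate_rows"] = int(df.duplicated().sum())

          numeric = df.select_dtypes(include=[np.number])
          for col in numeric.columns:
              values = numeric[col].dropna()
              if len(values) < 50:
                  continue  # too few observations for a meaningful test

              # Terminal-digit test: take the first decimal digit (the last
              # recorded digit if values are reported to one decimal place).
              # In many real measurements it is close to uniform; invented
              # numbers tend to be lumpy.
              digits = (values.abs() * 10).round().astype(int) % 10
              observed = np.bincount(digits, minlength=10)
              _, p = stats.chisquare(observed)
              report[f"{col}_terminal_digit_p"] = round(float(p), 4)

          return report

      # Hypothetical usage: flag anything suspicious for a human to inspect.
      # df = pd.read_csv("submitted_dataset.csv")
      # print(smell_test(df))
      ```

      Nothing here proves fabrication on its own; it just cheaply flags datasets that deserve a real forensic look.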

  • Doctor xNo · 5 points · 1 year ago

    The world 3 months ago: AI is growing exponentially and might take over the world soon. It can do everything you can, but better, and some models even seem almost sentient.

    The world today: Turns out the large language model made to fool us tried to fool us by ‘unexpectedly’ exhibiting the behavior it was made for.

    😅

  • @[email protected]
    link
    fedilink
    11 year ago

    LLMs are basically just really good bullshit generators: they tell you what you want to hear.

    Turns out that’s part of the job description for tech support agents, and for some lowbrow art.

    For all the other jobs people claimed AI could replace, bullshitting is antithetical to the job description.