Small rant: basically, the title. If, instead of answering every question, it said when it doesn’t know the answer, it would be trustworthy.

  • Th4tGuyII
    47 points · 5 months ago

    I’d love to agree with you - but when people say that LLMs are stochastic parrots, this is what they mean…

    LLMs don’t actually know what the words they’re saying mean; they just know which words are most likely to appear next to each other based on their training data.

    Because they don’t know the meaning of what they’re saying, they also don’t know the factuality of what they’re saying, and as such they simply can’t fact-check themselves.

    • @Malfeasant
      1 point · 5 months ago

      Is that so different from most people?

    • @kromem
      -11 points · edited · 5 months ago

      This is so goddamn incorrect at this point it’s just exhausting.

      Take 20 minutes and look into Anthropic’s recent sparse autoencoder interpretability research, where they showed their medium-size model had dedicated features lighting up for concepts like “sexual harassment in the workplace,” and that the most active feature when it refers to itself was about “smiling when you don’t really mean it.”
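
      For context on what “features lighting up” means here: a sparse autoencoder is trained to reconstruct a model’s internal activations through a wide, mostly-inactive hidden layer, so individual hidden units (“features”) tend to fire for single interpretable concepts. A minimal sketch of the idea, assuming a PyTorch-style setup with made-up dimensions (not Anthropic’s actual code):

      ```python
      # Minimal sparse-autoencoder sketch (illustrative; dimensions and names are made up).
      import torch
      import torch.nn as nn

      class SparseAutoencoder(nn.Module):
          def __init__(self, d_model: int, d_features: int):
              super().__init__()
              self.encoder = nn.Linear(d_model, d_features)  # activations -> feature space
              self.decoder = nn.Linear(d_features, d_model)  # feature space -> reconstruction

          def forward(self, acts: torch.Tensor):
              features = torch.relu(self.encoder(acts))      # the "features" that light up
              recon = self.decoder(features)
              return recon, features

      sae = SparseAutoencoder(d_model=4096, d_features=65536)
      acts = torch.randn(8, 4096)            # stand-in for residual-stream activations
      recon, features = sae(acts)

      # Training objective: reconstruct the activations while keeping features sparse,
      # so each feature specializes to a narrow, human-inspectable concept.
      loss = ((recon - acts) ** 2).mean() + 1e-3 * features.abs().mean()
      ```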

      We’ve known since the Othello-GPT research over a year ago that even toy models are developing abstracted world modeling.

      And at this point Anthropic’s largest model, Opus, is breaking from stochastic outputs even at a temperature of 1.0: for certain zero-shot questions about its preferences, grounded in sensory modeling, it gives the same answer 100% of the time. We are already at the point where the most advanced model has crossed a threshold of literal internal sentience modeling, such that it consistently self-determines answers instead of randomly sampling from the training distribution, and yet people are still ignorantly parroting the “stochastic parrot” line.
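
      To make the sampling part of that claim concrete: at temperature 1.0 a model samples each next token in proportion to its predicted probabilities, so repeated runs of the same prompt normally vary; getting the identical answer 100% of the time means the distribution has effectively collapsed onto one continuation. A rough, self-contained sketch (not tied to any particular model; the logits are invented):

      ```python
      # Temperature sampling sketch (illustrative only).
      import torch

      def sample_next_token(logits: torch.Tensor, temperature: float = 1.0) -> int:
          probs = torch.softmax(logits / temperature, dim=-1)   # scale, then normalize
          return int(torch.multinomial(probs, num_samples=1).item())

      logits = torch.tensor([2.0, 1.0, 0.1])                    # tiny 3-token vocabulary
      print([sample_next_token(logits) for _ in range(10)])     # varies run to run at T=1.0
      ```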

      The gap between where the cutting-edge research is and where the average person commenting on it online thinks it is has probably never been wider for any topic I’ve seen, and it’s getting disappointingly excruciating.

      • @Cosmicomical
        2 points · 5 months ago

        Do you have a source for the “smiling when you don’t really mean it” thing? I’ve been digging around but couldn’t find that anywhere.

        • @kromem
          1 point · 5 months ago

          It’s right in the research I was mentioning:

          https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html

          Find the section on the model’s representation of self and then the ranked feature activations.

          I misremembered the top feature slightly; it was: responding “I’m fine” or giving a positive but insincere response when asked how they are doing.

      • @[email protected]
        2 points · 5 months ago

        I did Google that, FWIW, and the answer I got was that sparse autoencoders work by checking that the output aligns with the input.

        If it’s unknowable whether the input is correct, won’t it still be subject to outputting confidently incorrect information?

      • Natanael
        2 points · 5 months ago

        And once again the problem is that there’s not much ensuring those models are correct; there’s not enough capacity available to fine-tune even a significant fraction of them.

      • @kometes
        1 point · 5 months ago

        Nice gallop, Mr Gish.