• @seven_phone
    11 hours ago

    That is exactly the point: LLMs aim to simulate the chaotic best-guess flow of the human mind, to be conscious, or at least present the appearance of thinking, and from that to access and process facts, not to be a repository of facts in themselves. The accusation here that the model constructed a fact and then built on it misses the point; this is exactly how organic minds work. Human memory is constantly reworked and altered based on fresh information and simple musings, and the new memory is taken as factual even while it is in large part fabricated, increasingly so over time. Many of our memories of past events bear only cursory fidelity to the actual details of the events themselves, to the point that they could be called imagined. We still take these imagined memories as real and act on them, exactly as the AI model has done here.

    • MudMan
      10 hours ago

      As below, stop with the analogies. No, that’s not “the chaotic best guess flow of a human mind”, that’s a whole bunch of tensor math generating likely chains of tokens. Those two things aren’t the same thing.

      They aren’t the same thing in the strict sense, and they’re not the same thing in practical terms at the end-user level either. If I ask a friend whether they remember some half-forgotten factoid, they can tell me not just whether they remember it, but also how well they remember it, how sure they are, and why they know it. No LLM can do that, because LLMs know as little about themselves as about anything else. Which is nothing, because they’re LLMs, not people.