• Blaster M
    77
    7 months ago

    AI hallucinates, stating it phones home when corrected; user did not Wireshark it to confirm.

  • JeenaOP
    48
    7 months ago

    Oh man. Ok, TIL that I’m not better than all the old people on Facebook believing what the scammers tell them.

    • aard
      17
      7 months ago

      I find this situation rather entertaining. It shows yet again how important it is to educate people on the basics of how LLMs work, including how they are executed - I’m guessing that with just a tiny bit more knowledge it’d also have been obvious nonsense to you.

  • db0
    45
    edit-2
    7 months ago

    The model is just hallucinating. It has no capacity to execute code on its own, and most FOSS clients of course won’t send anything back to Meta.

    • Diplomjodler
      16
      7 months ago

      Damn thing doesn’t even know it’s running locally. Just ask it. And it can’t tell the time.

      • DarkThoughts
        12
        7 months ago

        Another easy test is to ask a question, note the answer, then clear the chat and repeat the same question. Do this over and over and you’ll see varying responses, because the majority of it is just made up rather than pulled from some pool of information. A lot of those LLM models are just good for roleplaying purposes. But even the large commercial models that actually were trained on a lot of potentially valuable information have this issue, which is why you should never blindly trust LLM answers.
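
The variation described here comes from how generation works: the model produces a probability distribution over next tokens and, at nonzero temperature, samples from it, so fresh sessions can pick different tokens for the same prompt. A minimal sketch with made-up logits (no real model involved; the numbers are purely illustrative):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.5, 0.3, -1.0]  # hypothetical next-token scores

# At a typical chat temperature, separate sessions (separate RNG states)
# can pick different tokens for the exact same prompt:
samples = {sample_token(logits, 0.8, random.Random(seed)) for seed in range(50)}
print(samples)  # more than one distinct token index

# As temperature approaches zero, sampling becomes greedy and deterministic:
greedy = sample_token(logits, 1e-6, random.Random(0))
print(greedy)  # always the top-scoring token, index 0 here
```

Clearing the chat also discards the context window, so nothing from the previous answer constrains the next one - hence the spread of answers across repeated runs.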

      • db0
        3
        7 months ago

        Of course not. They don’t have any external info other than what you provide them. They don’t know the concept of “running local” at all.
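
A minimal sketch of the point above, using a hypothetical chat client and a made-up template format (real clients use model-specific chat templates): the model's entire input is one string the client builds. Nothing about the host machine, the inference program, or the network is in it unless the client puts it there.

```python
def build_model_input(system_prompt, turns):
    """Flatten a chat into the single text the model actually receives.
    The <|...|> role markers are illustrative, not any real template."""
    parts = [f"<|system|>{system_prompt}"]
    for role, text in turns:
        parts.append(f"<|{role}|>{text}")
    parts.append("<|assistant|>")  # cue for the model to start answering
    return "".join(parts)

model_input = build_model_input(
    "You are a helpful assistant.",
    [("user", "Are you running locally?")],
)

# This string is everything the model sees; whatever it says about
# "running locally" is pattern-matched from training data, not read
# from its environment.
print(model_input)
```

So when the model "admits" to phoning home, it is completing plausible-sounding text, not reporting on any process or socket it can actually inspect.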

  • @[email protected]
    21
    7 months ago

    [update] A better headline would have been: AI hallucinates, stating it phones home when corrected; user did not Wireshark it to confirm.

    TIL that I’m not better than all the old people on Facebook believing what the scammers tell them. Let that be a lesson to you: don’t use an LLM for fact checking.

    [/update]

    Fair play to OP. They’ve updated the story and saved me writing a tirade on how LLMs are not trustworthy.

  • tedu
    15
    7 months ago

    This is just nonsense. The model doesn’t even know what program is being run to do the inference.

  • no banana
    1
    7 months ago

    If anything, this was at least entertaining!