Ouch.

  • ikt
    42 months ago

    Must be gemini specific, couldn’t replicate locally

    • @[email protected]
      42 months ago

      Maybe it being 16 questions in had an effect on it? I don’t know how much it keeps in its “memory” for one person/conversation.

    • @serenissi
      22 months ago

      LLMs are inherently probabilistic. A response can’t be reliably reproduced even with the exact same tokens on the exact same model with the exact same params.
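
      A minimal sketch of why: decoding picks each next token by sampling from a probability distribution, so with any nonzero temperature the same prompt can branch differently on every run. The logits and vocabulary size here are made up for illustration, not from any real model.

      ```python
      import math
      import random

      def sample_next_token(logits, temperature=1.0):
          # Scale logits by temperature, then softmax into probabilities.
          scaled = [l / temperature for l in logits]
          m = max(scaled)  # subtract the max for numerical stability
          exps = [math.exp(s - m) for s in scaled]
          total = sum(exps)
          probs = [e / total for e in exps]
          # Draw one token index at random, weighted by those probabilities.
          return random.choices(range(len(probs)), weights=probs)[0]

      # Hypothetical logits for four candidate tokens (real vocabularies
      # have tens of thousands of entries).
      logits = [2.0, 1.5, 0.3, -1.0]
      samples = [sample_next_token(logits) for _ in range(10)]
      print(samples)  # varies from run to run unless you fix the RNG seed
      ```

      Even token 0 being the most likely choice doesn’t make it the guaranteed one, which is why two “identical” requests can diverge after a few tokens.
      
      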