Ouch.

  • ikt
    1 month ago

    Must be gemini specific, couldn’t replicate locally

    • @[email protected]
      1 month ago

      Maybe it being 16 questions into the conversation had an effect on it? I don’t know how much “memory” it keeps for one person/conversation.

    • @serenissi
      1 month ago

      LLMs are inherently probabilistic. A response can’t be reliably reproduced even with the exact same tokens on the exact same model with the exact same params.
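
      To illustrate the point: LLMs pick each output token by sampling from a probability distribution, so the same prompt can yield different continuations run to run. A minimal sketch of temperature sampling, with made-up logits for three hypothetical candidate tokens (not any real model's values):

      ```python
      import math
      import random

      def sample_token(logits, temperature=1.0):
          """Sample a token index from a temperature-scaled softmax over logits."""
          scaled = [l / temperature for l in logits]
          m = max(scaled)  # subtract max for numerical stability
          exps = [math.exp(s - m) for s in scaled]
          total = sum(exps)
          probs = [e / total for e in exps]
          r = random.random()
          cum = 0.0
          for i, p in enumerate(probs):
              cum += p
              if r < cum:
                  return i
          return len(probs) - 1

      # Hypothetical logits for three candidate next tokens.
      logits = [2.0, 1.5, 0.5]
      samples = [sample_token(logits) for _ in range(1000)]
      counts = {i: samples.count(i) for i in range(3)}
      print(counts)  # all three tokens show up; no single "the" answer
      ```

      With temperature 0 (always take the argmax) the sampling becomes deterministic, but hosted services typically sample with temperature > 0, so identical prompts diverge.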