• Chozo
    208 • 11 months ago

    If you paste plaintext passwords into ChatGPT, the problem is not ChatGPT; the problem is you.

      • @[email protected]
        59 • 11 months ago

        Did you read the article? That’s not what happened. Someone received someone else’s chat history appended to one of their own chats. No prompting; it just appeared overnight.

          • @[email protected]
            9 • 11 months ago

            Well, yeah, but the point is that ChatGPT didn’t “remember and then leak” anything; the web service exposed people’s chat history.

            • @[email protected]
              2 • 11 months ago

              Well, that depends. Do you mean GPT, the specific chunk of LLM code? Or do you mean ChatGPT, the website and service?

              Because while the nitpicking details matter to the programmers fixing it, how much does that distinction matter to you or me, the laymen using the site?

      • @topinambour_rex
        12 • 11 months ago

        How? How should it be implemented? It’s just an LLM. It has no true intelligence.

        • @Feathercrown
          7 • 11 months ago

          If it’s not trained on user data, it can’t leak it.

        • @pirat
          1 • 11 months ago

          Define true intelligence

      • @[email protected]
        7 • 11 months ago

        A huge value-add of ChatGPT is that you can have a running, contextual conversation. That requires memory.

        • @GamingChairModel
          6 • 11 months ago

          All of these LLM services should have walls between individual users, though, so that the chat history of one user is never accessible to any other user. Applying some kind of restriction to how chats are used in LLM training is a conversation we can have, but the article and the example given describe a much, much simpler problem: a user checking his own chat history was able to see other users’ chats.
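
          The “wall” that comment describes can be sketched as a simple ownership check on the server side. This is a hypothetical illustration (the `ChatStore` class and its methods are invented names, not OpenAI’s actual code): chat histories are keyed by the owning user, and any cross-user read is rejected before anything is returned.

```python
# Hypothetical sketch of per-user isolation for chat history.
# ChatStore and its methods are invented names, not a real API.
from collections import defaultdict

class ChatStore:
    def __init__(self):
        self._histories = defaultdict(list)  # user_id -> list of messages

    def append(self, user_id: str, message: str) -> None:
        self._histories[user_id].append(message)

    def get_history(self, owner_id: str, requester_id: str) -> list:
        # The "wall": a request for someone else's history fails outright,
        # so one user's chats are never returned to another user.
        if owner_id != requester_id:
            raise PermissionError("chat history is private to its owner")
        return list(self._histories[owner_id])

store = ChatStore()
store.append("alice", "my private chat")
store.append("bob", "hello")
alice_view = store.get_history("alice", "alice")   # owner reading her own chats
try:
    store.get_history("alice", "bob")              # cross-user read is blocked
    leaked = True
except PermissionError:
    leaked = False
```

          The point of the sketch is that the check lives in the storage layer, not in the model: the bug in the article is an access-control failure, not an LLM failure.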

        • Farid
          5 • edit-2 • 11 months ago

          It doesn’t actually have memory in that sense. It can only remember things that are in the training data or within its limited context (4–32k tokens, depending on the model). But when you send a message, ChatGPT does a semantic search of everything in the conversation and tries to fit the relevant parts inside the context, if there’s room.
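
          A toy version of that retrieval step might look like this. It assumes a crude word-overlap similarity in place of real embeddings, and a whitespace word count in place of a real tokenizer; both are stand-ins for illustration, not how ChatGPT actually works.

```python
# Toy sketch: rank past messages by similarity to the new message,
# then pack the best matches into a fixed "context window" budget.

def similarity(a: str, b: str) -> float:
    # Jaccard overlap of word sets; a stand-in for embedding similarity.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def build_context(history: list, new_message: str, budget_tokens: int) -> list:
    # Most relevant messages first, skipping any that would bust the budget.
    ranked = sorted(history, key=lambda m: similarity(m, new_message), reverse=True)
    context, used = [], 0
    for msg in ranked:
        cost = len(msg.split())  # stand-in for a real tokenizer's count
        if used + cost <= budget_tokens:
            context.append(msg)
            used += cost
    return context

history = [
    "We discussed the database schema for orders",
    "My cat is named Whiskers",
    "The orders table needs an index on customer_id",
]
ctx = build_context(history, "add an index to the orders table", budget_tokens=12)
# Only the most relevant message fits; the cat stays out of the context.
```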

          • @[email protected]
            6 • edit-2 • 11 months ago

            I’m familiar; it’s just easiest for a layman to think of the model as having “memory”, since a semantic search over the chat history looks a lot like memory at arm’s length.

    • @psud
      26 • 11 months ago

      Hey chatGPT, is hunter2 a good password?

      • konalt
        4 • 11 months ago

        I’m sorry, but as an AI language model, I cannot tell you about the effectiveness of “*******” as a password.