• @Evotech
    7 months ago

    Not really; it depends on the implementation.

    It’s not like DDG is going to keep training its own version of Llama or Mistral.

    • @regrub
      7 months ago

      I think they mean that a lot of careless people will give the AIs personally identifiable information or other sensitive information. Privacy and security are often breached due to human error, one way or another.

      • @Evotech
        7 months ago

        But these open models don’t feed user input back into their weights at any point. They don’t normally do that kind of training on inference data.
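        As a toy illustration of that point: serving a model for chat is, at a high level, a pure forward pass — generation reads the weights but never writes to them. The tiny "model" below is a hypothetical stand-in, not any real deployment:

```python
# Toy "model": frozen parameters and a pure forward pass.
# Answering queries reads the weights; nothing here updates them.
WEIGHTS = [0.5, -1.2, 2.0]  # illustrative frozen parameters

def infer(features):
    """Forward pass only: a dot product against the frozen weights."""
    return sum(w * x for w, x in zip(WEIGHTS, features))

snapshot = list(WEIGHTS)
infer([1.0, 2.0, 3.0])    # answer one query...
infer([4.0, 5.0, 6.0])    # ...and another
assert WEIGHTS == snapshot  # the weights are untouched by inference
```

        Training would be a separate, explicit step (computing gradients and mutating `WEIGHTS`); a provider that doesn’t run that step can’t absorb your queries into the model, whatever else it does with the logs.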

        • @regrub
          7 months ago

          That’s true, but there’s no way for us to know that these companies aren’t storing queries in plaintext on their end (although they’d run out of space pretty fast if they did).

          • @Evotech
            7 months ago

            It’s true. But I trust them more than ClosedAI or MS, at least.

      • @shotgun_crab
        7 months ago

        But that’s human error, as you said; the only way to fix it is to use the tool correctly as a user. AI is a tool and should be handled with the same care as any other tool, be it a knife, a car, a password manager, a video recorder, a bank app, or whatever.

        I think the bigger issue here is that many people don’t care about their personal information as much as they should.
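
        For example, a careless prompt can be scrubbed before it ever reaches the model. A minimal sketch using regex-based redaction — the patterns below are illustrative and nowhere near exhaustive:

```python
import re

# Illustrative patterns for a few obvious kinds of PII; real redaction
# needs far broader coverage (names, addresses, locale-specific formats).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious PII with placeholder tags before sending a prompt."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("My email is jane.doe@example.com, SSN 123-45-6789"))
# -> My email is [EMAIL], SSN [SSN]
```

        Real deployments would lean on dedicated PII-detection tooling rather than a handful of regexes, but the principle is the same: sanitize at the point of use, because nothing downstream can un-leak what you’ve already sent.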