• @[email protected]
    17 • 4 months ago

    Genuine question.

    So rude, you didn’t answer my question at all.

    yeah find me one single instance of someone doing this “genuine question” shit that doesn’t result in the most bad faith interpretation possible of the answers they get

    If I’m missing something obvious I’d love it if you told me.

    • most security vulnerabilities look like they cause the targeted program to spew gibberish, until they’re crafted into a more targeted attack
    • it’s likely that gibberish is the LLM’s training data, where companies are increasingly being encouraged to store sensitive data
    • there’s also a trivial resource exhaustion attack where you have one or more LLMs spew garbage until they’ve either exhausted their paid-for allocation of tokens or cost their hosting organization a relative fuckload of cash
    • either you knew all of the above already and just came here to be a shithead, or you’re the type of shithead who doesn’t know fuck about computer security but still likes to argue about it
    • fuck off
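The resource-exhaustion point is easy to make concrete with back-of-the-envelope arithmetic. A minimal sketch (the token counts and per-token price below are illustrative assumptions, not any provider's real numbers):

```python
# Rough cost estimate for making a hosted LLM spew garbage at the
# hosting org's expense. All numbers are illustrative assumptions.

def spew_cost(tokens_per_reply: int, replies: int,
              usd_per_million_output_tokens: float) -> float:
    """Total output-token cost the hosting org eats."""
    total_tokens = tokens_per_reply * replies
    return total_tokens / 1_000_000 * usd_per_million_output_tokens

# e.g. eliciting maximum-length 4096-token gibberish replies 100k times,
# at an assumed $15 per million output tokens:
cost = spew_cost(4096, 100_000, 15.0)
print(f"${cost:,.2f}")  # → $6,144.00
```

Trivial to automate, and the attacker pays nothing if the LLM is exposed through a free chat widget.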
    • @[email protected]
      9 • 4 months ago

      the number of times I’ve had to clean shit up after someone like this “didn’t think $x would matter”…

    • @0laura
      0 • 4 months ago

      If people put sensitive data in the training set, that’s where the security issue comes from. If people let the AI’s output do dangerous things, that’s where the security issue comes from. I thought it was common sense to treat everything an LLM has access to as publicly accessible. To me, saying an AI speaking gibberish is a security flaw is a bit like saying you can drown in the ocean. Of course, that’s how it works.