• @0laura · -7 points · 5 months ago

      How is it inherently a security issue when an LLM speaks gibberish? Genuine question.

      • @[email protected]
        link
        fedilink
        English
        8
        edit-2
        5 months ago

        it “speaking gibberish” is not the problem. the answer to your question is literally in the third paragraph of the article.

        if you do not comprehend what it references or implies, then (quite seriously) if you are in any way involved in any security shit get the fuck out. alternatively read up some history about, well, literally any actual technical detail of even lightly technical systems hacking. and that’s about as much free advice as I’m gonna give you.

        • @0laura · -7 points · 5 months ago · edited

          Removed by mod

          • @[email protected]
            link
            fedilink
            English
            175 months ago

            > Genuine question.

            > So rude, you didn’t answer my question at all.

            yeah find me one single instance of someone doing this “genuine question” shit that doesn’t result in the most bad faith interpretation possible of the answers they get

            > If I’m missing something obvious I’d love it if you told me.

            • most security vulnerabilities look like they cause the targeted program to spew gibberish, until they’re crafted into a more targeted attack
            • it’s likely that the gibberish comes from the LLM’s training data, where companies are increasingly being encouraged to store sensitive data
            • there’s also a trivial resource exhaustion attack where you have one or more LLMs spew garbage until they’ve either exhausted their paid-for allocation of tokens or cost their hosting organization a relative fuckload of cash
            • either you knew all of the above already and just came here to be a shithead, or you’re the type of shithead who doesn’t know fuck about computer security but still likes to argue about it
            • fuck off
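The resource-exhaustion bullet above can be put in rough numbers. This is a hypothetical sketch: the per-token price, output cap, and request count are all invented, not drawn from any real provider's pricing.

```python
# Hypothetical back-of-envelope for the resource-exhaustion attack: if an
# attacker can force a hosted LLM to emit maximum-length garbage on every
# request, the victim pays for every output token. All figures are made up.

def exhaustion_cost(requests: int, max_output_tokens: int,
                    price_per_1k_tokens: float) -> float:
    """Worst-case spend when every request is driven to the output cap."""
    return requests * max_output_tokens / 1000 * price_per_1k_tokens

# e.g. a small botnet driving 100k requests at a 4k-token output cap
cost = exhaustion_cost(requests=100_000, max_output_tokens=4_096,
                       price_per_1k_tokens=0.03)
print(f"${cost:,.2f}")  # → $12,288.00
```

Even at these invented rates the bill lands in five figures, which is one reason hosts meter output tokens per caller and rate-limit aggressively.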
            • @[email protected]
              link
              fedilink
              English
              95 months ago

              the amount of times I’ve had to clean shit up after someone like this “didn’t think $x would matter”…

            • @0laura · 0 points · 5 months ago

              If people put sensitive stuff in the training data, then that’s where the security issue comes from. If people allow the AI’s output to do dangerous stuff, then that’s where the security issue comes from. I thought it was common sense to treat everything an LLM has access to as publicly accessible. Saying AI speaking gibberish is a security flaw is, to me, a bit like saying you can drown in the ocean. Of course, that’s how it works.

          • @[email protected]
            link
            fedilink
            English
            135 months ago

            so you start by claiming that you don’t think there’s any problematic security potential, follow it up by clarifying that you actually have no fucking understanding of how any of it could work and might matter, and then you get annoyed at the response? so rude, indeed!

              • @[email protected]
                link
                fedilink
                English
                135 months ago

                you know what

                I’ll do you the courtesy of an even mildly thorough response, despite the fact that this is not the place and that it’s not my fucking job

                one of the literal pillars of security intrusions/research/breakthroughs is in the field of exploiting side effects. as recently as 3 days ago there was some new stuff published about a fun and ridiculous way to do such things. and that kind of thing can be done in far more types of environments than you’d guess. people have managed large-scale intrusions/events by the simple matter of getting their hands on a teensy little fucking bit of string.

                there are many ways this shit can be abused. and now I’m going to stop replying to this section, on which I’ve already said more than enough.

                • @0laura · 0 points · 5 months ago

                  If you give AI the ability to do anything dangerous, then that’s your problem, not the AI possibly doing those things. The DAN stuff has been there from the very beginning and I doubt it’ll ever fully go away; it shouldn’t be considered a security risk imo.

      • kbal · 3 points · 5 months ago · edited

        It’s a reasonable question, and the answer is perhaps beyond my ken even though I’ve had substantial experience with both building machine learning models (mostly in pre-LLM times) and keeping computer systems secure. That a chatbot might tell someone “how to make a bomb” is probably not a great example of the dangers they pose. Bomb making instructions are more or less available to everyone who can find chemistry textbooks. The greater dangers that the LLM owners are trying to guard against might instead be more like having one advising someone that they should make a bomb. That sort of thing could be hazardous to the financial security of the vendor as well as the health of its users.

        Finding an input that will make the machine produce gibberish is not directly equivalent to the kind of misbehaviour that often indicates exploitable bugs in software that “crashes” in more conventional ways. But it may be loosely analogous to it, in that it’s an observation of unintended behaviour which might reveal flaws that would otherwise remain hidden, giving attackers something to work with.

        • @[email protected]
          link
          fedilink
          English
          9
          edit-2
          5 months ago

          so there are 3 immediately-suggestive paths that come to mind from this

          the first is that a prompt producing gibberish already means you’ve hit a boundary in the design of its execution space (or you’re fucking around at the very edges of the training data, where its precision gets low), and that could mean you’re beyond what the programmers thought of/handled. whether or not you can get reliable further behaviours in that mode/space will be extremely contingent on a lot of factors (model type, execution type, runtime, …), but given how extremely rapidly and harshly oai (and friends) reacted to simple behavioural breaks, I get the impression that they’re more concerned with such cases than they might be letting on

          the second fairly obvious vector is where everyone is trying to shove LLMs into everything without good safety boundaries. oh, that handy chatbot on your doctor/airline/insurance/… site that’s pitched as “it can use your identification details and look up $x”[0]? that means the system has access to places where it can look up private data. so if you could break a boundary via whatever method, who’s to say it can’t go further? it’s not like telling the prompt “do $x and only $x” will work, as many examples have shown
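The point about “do $x and only $x” prompts not working can be sketched as: the enforceable boundary has to live in the tool layer, not in the model's instructions. Everything here (`RECORDS`, `lookup_booking`, the booking codes) is invented for illustration.

```python
# Sketch, with invented names: whatever the model puts in its tool call,
# the lookup is scoped to the authenticated session before it runs.

RECORDS = {
    "alice": {"booking": "FL123"},
    "bob": {"booking": "FL456"},
}

def lookup_booking(requested_user: str, session_user: str) -> str:
    # The model may have been injected into asking for someone else's
    # record; this check ignores the model entirely and trusts only the
    # session established outside the LLM.
    if requested_user != session_user:
        raise PermissionError("tool call tried to cross the account boundary")
    return RECORDS[session_user]["booking"]
```

A prompt injection that talks the model into requesting bob's record while alice is logged in then fails in ordinary code, rather than depending on the model behaving.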

          third path, and sort-of the one that ties the bow on the second a bit, is that most of these dipshits probably don’t have proper isolation controls, just because it’s hard and effortful. building actual multitenancy with strong inter-tenant separation is a lot of work. that’s something that’s just not done in bayfucker world unless it is specifically needed. so the more these things get shoved into various products and this segmentation work is not done thoroughly, the more likely that sort of shit becomes
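The inter-tenant separation being described could be sketched, under invented names, as a store that keys every read and write by tenant id, so isolation holds by construction rather than by model obedience.

```python
# Hypothetical sketch of strong inter-tenant separation: each tenant gets
# its own namespace, and reads can only ever see the caller's namespace.

class TenantStore:
    def __init__(self) -> None:
        self._data: dict[str, dict[str, str]] = {}

    def put(self, tenant: str, key: str, value: str) -> None:
        self._data.setdefault(tenant, {})[key] = value

    def get(self, tenant: str, key: str) -> str:
        # KeyError rather than any cross-tenant fallback: a confused or
        # hijacked model acting inside tenant A has no path to tenant B.
        return self._data[tenant][key]
```

With this shape, shoving an LLM into the product can at worst leak the current tenant's own data back to the current tenant; the segmentation work the comment describes is exactly what's skipped when everything lands in one shared namespace.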

          [0] - couple years back (pre-llm) I worked on exactly this problem with a client. it’s fantastically annoying to design, not half because humans are such wonderfully unpredictable input sources

          • kbal · 8 points · 5 months ago

            Yeah, no doubt they will push to give the things built atop the shaky foundation of LLMs as much responsibility and access to credentials as they think they can get away with. Making the models trustworthy for such purposes has been the goal since DeepMind set off in that direction with such optimism. There are a lot of people eager to get there, and a lot of other people eager to give us the impression right now that they will get there soon. That in itself is one more reason they react with some alarm when the products are easily provoked into producing garbage.

            I’m sure it will go wrong in many interesting ways. Seems to me there are risks they haven’t begun to think about. There’s a lot of focus on preventing the models from producing output that’s obviously morally offensive, and very little thought given to the idea that output entirely within the bounds of what is thought acceptable might end up accidentally calibrated to reinforce and perpetuate the existing prejudices and misconceptions the machines have learned from us.

          • @barsquid · 6 points · 5 months ago

            Why would they bother with safety boundaries for AI? Companies leak millions of records of PII all the time and there are zero real consequences. Of course we will start seeing access level bypass exploits leaking customer data.

          • Tar_Alcaran · 4 points · 5 months ago

            > couple years back (pre-llm) I worked on exactly this problem with a client. it’s fantastically annoying to design, not half because humans are such wonderfully unpredictable input sources

            Oh don’t worry, humans are amazingly unpredictable interfaces too, which is why social engineering works so well.

        • @0laura · 1 point · 5 months ago

          Yea, sounds like we basically agree on it. Not sure why everyone was so mad at me…

          • kbal · 1 point · 5 months ago

            > you have nothing to worry about when its a bit silly, besides an annoyed customer

            That’d be where we disagree. Both the customer (assuming they have an actual purpose in mind and are not just playing around) and the provider have a wide variety of things to worry about here.

            • @0laura · 1 point · 5 months ago

              well that depends on what kind of service you’re using ai for.