• 33550336M
    link
    fedilink
    arrow-up
    2
    arrow-down
    1
    ·
    6 hours ago

    Everything is deterministic; we simply don’t know the mechanism behind every phenomenon.

  • very_well_lost
    link
    fedilink
    English
    arrow-up
    112
    arrow-down
    5
    ·
    1 day ago

    I think this is stupid and I’ll tell you why.

    If you’re able to install OpenClaw on a system, you already have the access you need to install literally anything else, and direct that system to do whatever you want. Why would I install an AI agent to carry out my exploit when I could just install conventional malware that behaves deterministically and won’t randomly hallucinate behaviors that will expose the fact my victim has been hacked?

    AI worms are just regular malware worms, but worse.

    • derbolle
      link
      fedilink
      arrow-up
      38
      ·
      1 day ago

      Good point on the whole, but I have to disagree somewhat. For regular malware there’s a high chance it gets detected by endpoint protection at some point. Yes, I know there are obfuscation techniques, but even those are deterministic, or at least a bit more predictable than whatever the hell an LLM is up to. So I think there is a valid case for malware developers to consider “agentic” malware. Sadly, many companies dive headfirst into the AI agent cult for dev work, so one extra Docker container in WSL or the like probably goes unnoticed, at least until heads cool and infosec departments catch up to this stuff. It’s just one more massive attack vector.

      • kautau
        link
        fedilink
        arrow-up
        18
        ·
        1 day ago

        Yeah, this is potentially polymorphism at a new level. You don’t tell the other agents to download a binary with a detectable signature; you prompt-poison them into checking what build tools they have available, with a set of instructions to build software that sits, waits, and checks for instructions or pings an endpoint. Some agents write a bash script, some write Python, some build a Rust binary, and so on, as long as it does the thing. Then you tell it to hide the binary and update .claude or whatever tool to run it as a hook on every command. Once the payload for it to load is there, they all fire. And even if only 50% of the MOST STARRED recent 🤦 project on GitHub runs them, maybe the instructions are to proliferate further some other way, silently. This is like sheep for wolves that weren’t smart enough to build Stuxnet.
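        To make the signature point concrete, here’s a toy sketch (harmless placeholder scripts, nothing agent-generated) of why hash-based signatures can’t link two functionally equivalent payloads:

```python
import hashlib

# Two trivially different scripts that do the same thing.
# A scanner comparing file hashes sees no relation between them.
script_a = b"import os\nos.system('echo ping')\n"
script_b = b"import subprocess\nsubprocess.run(['echo', 'ping'])\n"

sig_a = hashlib.sha256(script_a).hexdigest()
sig_b = hashlib.sha256(script_b).hexdigest()

print(sig_a == sig_b)  # False: same behavior, completely unrelated signatures
```

        Now imagine every infected machine emitting its own variant; there is no single hash to blocklist.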

  • dejected_warp_core
    link
    fedilink
    arrow-up
    15
    ·
    1 day ago

    We are indeed living inside the stupidest version of Cyberpunk. Time to start building AI countermeasures.

    I think we have more to fear from using AI to generate permutations of existing attacks, in a way that evades detection of known behaviors, malware hashes, and so on. Also, having a command & control (C2) style attack dynamically evolve with help from AI, based on intel from the target? That’s kind of novel and scary in its own way.

    Meanwhile, hacking in and running a rogue AI client on a target system in an enterprise setting… well, you’d have to be blind not to notice all the back-and-forth token and response traffic. It would be the fattest, noisiest C2-style attack around, and probably easy to detect with conventional means.

    Otherwise, OP and this copypasta are correct to be concerned. It’s not like the typical home user is watching bytes sent/received on their home router. This could manifest as a very potent botnet problem.

    • real_squids@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      10
      ·
      1 day ago

      We are indeed living inside the stupidest version of Cyberpunk.

      I just wanted robo-legs man…

      • dejected_warp_core
        link
        fedilink
        arrow-up
        9
        ·
        24 hours ago

        I hear you. I just want a cyber-brain implant to stabilize my ADHD and maybe add more working memory. Instead, I’m now terrified of what the intersection of cyberware and enshittification would look like. After seeing what has happened to consumer electronics in the last 10 years, Deus Ex has nothing on what our current tech giants would do.

  • InnerScientist
    link
    fedilink
    arrow-up
    26
    arrow-down
    4
    ·
    1 day ago

    Press x to doubt.

    Ignoring the question of “could current AI even do this?”, the fact remains that most PCs that can get infected either can’t run the model (not enough RAM), or run it with an immediately noticeable spike in CPU usage (100% for hours or days), or with a spike in GPU usage that would grind most other tasks to a standstill.

    • Sv443@sh.itjust.works
      link
      fedilink
      arrow-up
      8
      arrow-down
      1
      ·
      1 day ago

      It doesn’t need to run at full power; it can be slow but deadly. And 99% of users are not gonna notice a random program using 10% of their resources.

      • InnerScientist
        link
        fedilink
        arrow-up
        5
        arrow-down
        1
        ·
        1 day ago

        Not sure that would work; a restart resets all progress, so if the program doesn’t finish inside the 8-hour workday it will never progress. Add to that that the AI will use the same amount of RAM no matter how much you throttle the CPU, so you’ll still slow the PC down immensely. The small models also aren’t smart, so they’d break often too.

        Edit: Does a working PoC exist? Shouldn’t be hard to (dis)prove.

  • Alberat
    link
    fedilink
    arrow-up
    13
    arrow-down
    1
    ·
    1 day ago

    We’ve had better-than-“byte-for-byte” malware detection since, like, 2007.

  • okwhateverdude
    link
    fedilink
    arrow-up
    29
    arrow-down
    6
    ·
    1 day ago

    “Different, nondeterministic things on every install”? Massive doubt. I know this is the Fuck AI comm, but know thine enemy. Models are simply incapable of true randomness; they’re even worse at it than humans. It takes great effort to introduce entropy and get a truly out-of-distribution result. Yes, there very likely will be a “worm” among people who have existing relationships with token providers, where the agent can surreptitiously use API keys lying around, but that’s a tiny number of people.

    • apparia@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      16
      arrow-down
      2
      ·
      1 day ago

      What? They’re just computer programs. Almost all computers have high quality entropy sources that can generate truly random numbers. LLMs’ whole thing is basically turning sequences of random numbers into sequences of less random stuff that makes sense. They have a built-in dial for nondeterminism, and it’s almost never at zero.
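      To illustrate the dial, here’s a toy temperature-scaled sampler (a sketch, not any real model’s API; the logits are made up):

```python
import math
import random

def sample(logits, temperature, rng):
    """Pick a token index from logits at the given temperature.
    temperature == 0 means greedy argmax: fully deterministic."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]      # the model's preferences over 3 toy tokens
rng = random.Random()          # could be seeded from an OS entropy source

greedy = {sample(logits, 0, rng) for _ in range(100)}
print(greedy)                  # always {0}: dial at zero is deterministic
hot = {sample(logits, 5.0, rng) for _ in range(100)}
print(len(hot) > 1)            # high temperature: multiple tokens show up
```

      Same program, same weights; the entropy source decides whether you get one output or many.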

      I feel like I’m missing your meaning because the literal interpretation is nonsense.

      • okwhateverdude
        link
        fedilink
        English
        arrow-up
        7
        ·
        1 day ago

        Yes and no. The models themselves are just a big pile of floating-point numbers that represent a compression of the dataset they were trained on. The patterns in that dataset will absolutely dominate the output of the model even if you tweak the inference parameters. Try it: ask it ten times to make a list of 20-30 random words, each time in a fresh context. The alignment between those lists will be uncanny; hell, you’ll even see repeats within a single list. Size of the model matters here, with the small ones (especially quantized ones) having fewer patterns, or bigger semantic gravity wells. But even the big boys will give you the same slop patterns, which are mostly fixed. Unless you are specifically introducing more entropy into the prompt, you can mostly treat a fixed prompt as a function with a somewhat deterministic output (within given bounds).
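        You can simulate the effect with a toy vocabulary where a few “attractor” words hog the probability mass (all the numbers here are invented, just to show the shape of it):

```python
import random

VOCAB = [f"word{i}" for i in range(1000)]

def skewed_list(rng, k=25):
    # Model-like: 40 "attractor" words soak up almost all the probability.
    weights = [1000 if i < 40 else 1 for i in range(len(VOCAB))]
    return set(rng.choices(VOCAB, weights=weights, k=k))

def uniform_list(rng, k=25):
    # Truly random baseline: every word equally likely, no repeats.
    return set(rng.sample(VOCAB, k))

rng = random.Random(0)
skewed = [skewed_list(rng) for _ in range(10)]
uniform = [uniform_list(rng) for _ in range(10)]

def mean_overlap(lists):
    """Average number of shared words across all pairs of lists."""
    pairs = [(a, b) for i, a in enumerate(lists) for b in lists[i + 1:]]
    return sum(len(a & b) for a, b in pairs) / len(pairs)

print(mean_overlap(skewed) > mean_overlap(uniform))  # True: heavy overlap
```

        The skewed lists keep drawing from the same 40 attractors, so any two runs share a pile of words, which is roughly what the repeated “random words” prompt shows on a real model.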

        This means the claims in the OP are simply not true. At least not without some caveats and specific workarounds to make them true.

        • Tiresia@slrpnk.net
          link
          fedilink
          arrow-up
          7
          arrow-down
          1
          ·
          1 day ago

          At least not without some caveats and specific work arounds to make it true

          Luckily hackers are terrible at doing that, otherwise we might be in trouble.

          • okwhateverdude
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            1 day ago

            Haha, you’re not wrong. All I’m pointing out is that inducing enough true randomness in an agent to make fighting an agent worm really difficult is itself really difficult, and a very understudied thing in general. I’ve done experiments on introducing entropy into prompts, and it’s very hard to thread the needle between instruction-following and entropy. I’ve only seen one other dude posting experiments on introducing entropy into prompts.

        • shoo
          link
          fedilink
          arrow-up
          1
          ·
          1 day ago

          Ask it ten times to make list of 20-30 random words

          This is true of out-of-the-box models, but it’s not a universal rule. You could turn the temperature all the way up and get something way more random, probably to the point of incoherence.

          The trick is balancing that against keeping the model doing something useful. If you’re clever, you could tap /dev/random or similar as a tool to manually inject randomness while keeping the rest of the pipeline deterministic given that entropy.
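          Roughly like this, as a sketch: pull a seed from the OS entropy pool (os.urandom stands in for /dev/random here), and everything downstream stays reproducible given that seed:

```python
import os
import random

def entropy_seeded_rng():
    """Draw a seed from the OS entropy pool (akin to /dev/random),
    then use it to drive an otherwise deterministic sampler."""
    seed = int.from_bytes(os.urandom(8), "big")
    return seed, random.Random(seed)

seed, rng = entropy_seeded_rng()
run_a = [rng.randrange(50000) for _ in range(5)]   # stand-in for token picks

# Replaying with the same seed reproduces the run exactly:
replay = random.Random(seed)
run_b = [replay.randrange(50000) for _ in range(5)]
print(run_a == run_b)  # True: truly random seed, deterministic given the seed
```

          Each install gets genuinely unpredictable behavior, yet the generation itself stays coherent because only the seed carries the entropy.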

  • GrindingGears@lemmy.ca
    link
    fedilink
    arrow-up
    6
    arrow-down
    8
    ·
    1 day ago

    I’m pretty sure these people have worms in their brains. They are so sucked into AI vortexes of shit. That whole statement looks like it was written by AI. I’m a threat, would you like to know more? Keep using me, keep using me, I’ll tell you how to stop this threat. Would you like to know more?

    • Jared White ✌️ [HWC]@humansare.social
      link
      fedilink
      English
      arrow-up
      12
      ·
      1 day ago

      The statement was written by one of the architects of ActivityPub. I can assure you, she is quite serious about this thesis. Whether it happens exactly like that or not isn’t for me to judge, because I’m not a cybersecurity expert.

      I do believe that, as a general rule, agentic network activity is indistinguishable from malware.

      • GrindingGears@lemmy.ca
        link
        fedilink
        arrow-up
        4
        arrow-down
        5
        ·
        1 day ago

        Nor am I. I’m not doubting her credentials; I’m just doubting whether she’s sucked down a vortex right now. That’s the thing with these LLMs: they’re created by companies that are all-time greats at finding ways to hit you with dopamine and suck you in. People keep losing sight of that.