It’s fake. LLMs don’t execute commands on the host machine. They generate text in response, but never have access to, or the ability to execute, arbitrary code in their environment.
Some offerings, like ChatGPT, do actually have the ability to run code, which runs inside a sandboxed “virtual machine”.
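On the host side it’s presumably something like this minimal sketch (the `run_untrusted` helper and its limits are my assumption, not ChatGPT’s actual implementation, which is far more locked down):

```python
import subprocess
import tempfile

def run_untrusted(code: str, timeout: int = 5) -> str:
    # Write the model's code to a temp file and run it in a separate
    # interpreter process, never exec() inside the host process.
    # Real deployments add much stronger isolation (containers, no network).
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            ["python3", path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout + result.stderr
    except subprocess.TimeoutExpired:
        return "error: execution timed out"

print(run_untrusted("print(2 + 2)"))  # -> 4
```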
That sandbox boundary can sometimes be exploited. For example: https://portswigger.net/web-security/llm-attacks/lab-exploiting-vulnerabilities-in-llm-apis
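The rough idea behind that kind of lab (the names here are made up for illustration, not the lab’s actual code): the LLM is given access to a backend “tool”, and the tool’s implementation shells out with user-controlled input, so you can smuggle shell syntax through the model:

```python
import subprocess

# Hypothetical backend "tool" the LLM is allowed to call.
def subscribe_to_newsletter(email: str) -> str:
    # VULNERABLE: shell=True + string interpolation = command injection.
    # Asking the model to subscribe "$(whoami)@example.com" runs `whoami`.
    subprocess.run(f"echo {email} >> subscribers.txt", shell=True)
    return "subscribed"

# The fix is the usual one: never build shell strings from
# user- (or model-) supplied input. Here, skip the shell entirely.
def subscribe_safely(email: str) -> str:
    with open("subscribers.txt", "a") as f:
        f.write(email + "\n")
    return "subscribed"
```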
But getting out of the VM will most likely be protected against, so you’d have to find exploits for that as well (e.g. whether you can pivot further into the network from that point).
Some models are allowed to run code by (I assume) generating a prefix that tells the environment to execute the statement that follows. ChatGPT seems to have something similar, but I haven’t tested it, and I doubt it runs terminal commands or has root access. I assume it’s a funny coincidence that the error popped up then, or it was indeed faked for some reason.
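If I had to guess at that prefix mechanism, it’d be something like this (the `RUN:` marker is made up; real products use structured tool/function calls rather than a plain-text prefix):

```python
# Hypothetical dispatcher: lines the model prefixes with a marker get
# executed by the environment instead of being shown to the user.
RUN_PREFIX = "RUN:"  # made-up marker; real systems use structured tool calls

def handle_model_output(reply: str) -> None:
    for line in reply.splitlines():
        if line.startswith(RUN_PREFIX):
            code = line[len(RUN_PREFIX):].strip()
            # Hand off to a sandboxed executor, never eval() on the host.
            print(f"[would run in sandbox] {code}")
        else:
            print(line)

handle_model_output("Here you go:\nRUN: print(2 + 2)")
```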