There’s some sort of cosmic irony in the idea that some hacking could legitimately just become social-engineering AI chatbots into giving you the password.
There’s no way the model has access to that information, though.
An important Google product must have properly scoped secret management, not just environment variables or similar.
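"Scoped" secret management means each workload can only read the secrets it has explicitly been granted, unlike environment variables, which every process in the container can read wholesale. A toy sketch of the idea (all class and identity names here are hypothetical; real deployments would use something like a cloud secret manager with IAM bindings):

```python
class ScopedSecretStore:
    """Toy illustration of scoped secret access (hypothetical API, not a
    real library). Each caller identity is granted specific secrets only."""

    def __init__(self):
        self._secrets = {}  # secret name -> value
        self._grants = {}   # caller identity -> set of allowed secret names

    def put(self, name, value):
        self._secrets[name] = value

    def grant(self, identity, name):
        self._grants.setdefault(identity, set()).add(name)

    def get(self, identity, name):
        # Access is denied unless this identity was explicitly granted
        # this exact secret -- there is no "read everything" path.
        if name not in self._grants.get(identity, set()):
            raise PermissionError(f"{identity} may not read {name}")
        return self._secrets[name]


store = ScopedSecretStore()
store.put("db-password", "s3cret")
store.grant("billing-service", "db-password")

print(store.get("billing-service", "db-password"))  # prints "s3cret"
# store.get("chatbot", "db-password") would raise PermissionError
```

The point of the sketch: a compromised chatbot process holding the "chatbot" identity simply has no code path to the database password, whereas with environment variables one leaked `printenv` dumps everything.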
There’s no root login. It’s all containers.
It’s containers all the way down!
All the way down.
I deploy my docker containers in .mkv files.
deleted by creator
The containers still run an OS, have proprietary application code on them, and have memory that probably contains other users’ data in it. Not saying it’s likely, but containers don’t really fix much in the way of gaining privileged access to steal information.
That’s why it’s containers… in containers
It’s like wearing 2 helmets. If 1 helmet is good, imagine the protection of 2 helmets!
So is running it on actual hardware basically rawdoggin?
Wow what an analogy lol
What if those helmets are watermelon helmets
Then two would still be better than one 😉
The OS in a container is usually pretty barebones though. Great containers usually use distroless base images. https://github.com/GoogleContainerTools/distroless
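A typical distroless setup is a multi-stage build: compile in a full-featured image, then copy only the binary into a final stage with no shell, no package manager, and no login. A minimal sketch (the project layout and binary names are made up for illustration):

```dockerfile
# Build stage: full Go toolchain, produces a static binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: distroless base image -- no shell, no package manager,
# nothing to log in to even if an attacker gets code execution
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```

Note there is no `RUN` in the final stage at all; there is no shell to run it with, which is exactly the point.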
The containers will have a root login, but the ssh port won’t be open.
I doubt they even have a root user. Just whatever system packages are required baked into the image.
Containers can ship with almost nothing in them. Some contain only the binary that gets executed. Many do contain pretty much a full distribution, but I have yet to see a container with a password hash in its /etc/shadow file…
So while the container has a root account, it doesn’t allow any login at all: no password, no ssh key, nothing.
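In /etc/shadow terms, "has a root account but no login" looks like a `*` or `!` in the password field: the account exists, but no password can ever hash to that value. A small sketch of the check (the shadow lines below are illustrative, not from any real image):

```python
def is_locked(shadow_line: str) -> bool:
    """True if the shadow entry's password field is '*' or starts with '!',
    i.e. no password a user types could ever match it."""
    password_field = shadow_line.split(":")[1]
    return password_field == "*" or password_field.startswith("!")


# Typical container image: root exists, password login is impossible.
print(is_locked("root:*:19000:0:99999:7:::"))              # True
# A normal workstation account with an actual hash:
print(is_locked("alice:$6$salt$hashed:19000:0:99999:7:::"))  # False
```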
deleted by creator
It does if they uploaded it to github
In that case, it’ll steal someone else’s secrets!
Still, for things like content moderation and data analysis, this could totally be a problem.
But you could get it to convince the admin to give you the password, without you having to do anything yourself.
It will not surprise me at all if this becomes a thing. Advanced social engineering relies on extracting little bits of information at a time in order to form a complete picture while not arousing suspicion. This is how really bad cases of identity theft work as well. The identity thief gets one piece of info and leverages that to get another and another, and before you know it they’re at the DMV convincing someone to give them a driver’s license with your name and their picture on it.
They train AI models to screen for some types of fraud, but at some point it seems like it could become an endless game of whack-a-mole.
While you can get information out of them, I’m pretty sure what that person meant was that sensitive information wouldn’t have been included in the training data or the prompt in the first place, if anyone developing it had a functioning brain cell or two.
It doesn’t know the sensitive data to give away, though it can just make it up
Wouldn’t it also be easy to find these methods with it?