So Podman is an open source container engine like Docker—with "full"1 Docker compatibility. IMO Podman’s main benefit over Docker is security. But how is it more secure? Keep reading…

Docker traditionally runs a daemon as the root user, and you need to mount that daemon’s socket into various containers for them to work as intended (See: Traefik, Portainer, etc.) But if someone compromises such a container and therefore gains access to the Docker socket, it’s game over for your host. That Docker socket is the keys to the root kingdom, so to speak.
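
In case you've never seen it, that pattern looks something like this (a rough sketch, not a full Traefik config; the image tag is just an example):

    # The root-owned daemon's socket gets bind-mounted into the container
    # so Traefik can discover your other containers. Anyone who compromises
    # this container can now drive the daemon, i.e. the host.
    docker run -d --name traefik \
      -p 80:80 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      traefik:v2.10 --providers.docker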

Podman doesn’t have a daemon by default, although you can run a very minimal one for Docker compatibility. And perhaps more importantly, Podman can run entirely as a non-root user.2 Non-root means if someone compromises a container and somehow manages to break out of it, they don’t get the keys to the kingdom. They only get access to your non-privileged Unix user. So like the keys to a little room that only contains the thing they already compromised.2.5 Pretty neat.
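
Day to day, that just looks like this (a sketch; the image and port are only examples):

    # As your regular, unprivileged user -- no sudo, no daemon running:
    podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
    podman ps
    # On the host, the container's "root" process is just your own UID:
    ps -o user,pid,comm -C nginx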

Okay, now for the annoying parts of Podman. In order to achieve this rootless, daemonless nirvana, you have to give up the convenience of Unix users in your containers being the same as the users on the host. (Or at least the same UIDs.) That’s because Podman typically3 runs as a non-root user, and most containers expect to run either as root or as some other specific user.

The "solution"4 is user re-mapping. Meaning that you can configure your non-root user that Podman is running as to map into the container as the root user! Or as UID 1234. Or really any mapping you can imagine. If that makes your head spin, wait until you actually try to configure it. It’s actually not so bad on containers that expect to run as root. You just map your non-root user to the container UID 0 (root)… and Bob’s your uncle. But it can get more complicated and annoying when you have to do more involved UID and GID mappings—and then play the resultant permissions whack-a-mole on the host because your volumes are no longer accessed from a container running as host-root…

Still, it’s a pretty cool feeling the first time you run a “root” container in your completely unprivileged Unix user and everything just works. (After spending hours of swearing and Duck-Ducking to get it to that point.) At least, it was pretty cool for me. If it’s not when you do it, then Podman may not be for you.

The other big annoying thing about Podman is that because there’s no Big Bad Daemon managing everything, there are certain things you give up. Like containers actually starting on boot. You’d think that’d be a fundamental feature of a container engine in 2023, but you’d be wrong. Podman doesn’t do that. Podman adheres to the “Unix philosophy.” Meaning, briefly, if Podman doesn’t feel like doing something, then it doesn’t. And therefore expects you to use systemd for starting your containers on boot. Which is all well and good in theory, until you realize that means Podman wants you to manage your containers entirely with systemd. So… running each container with a systemd service, using those services to stop/start/manage your containers, etc.
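
For the curious, that workflow goes roughly like this (a sketch; the container name and image are made up):

    # Create a container, then have Podman generate a systemd unit for it:
    podman create --name whoami -p 8080:80 docker.io/traefik/whoami
    mkdir -p ~/.config/systemd/user
    podman generate systemd --new --name whoami \
      > ~/.config/systemd/user/container-whoami.service

    # From here on you manage the container with systemctl, not podman:
    systemctl --user daemon-reload
    systemctl --user enable --now container-whoami.service
    # And so user services (read: containers) start at boot without a login:
    loginctl enable-linger $USER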

Which, if you ask me, is totally bananasland. I don’t know about you, but I don’t want to individually manage my containers with systemd. I want to use my good old trusty Docker Compose. The good news is you can use good old trusty Docker Compose with Podman! Just run a compatibility daemon (tiny and minimal and rootless… don’t you worry) to present a Docker-like socket to Compose and boom everything works. Except your containers still don’t actually start on boot. You still need systemd for that. But if you make systemd run Docker Compose, problem solved!
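
Concretely, the dance is something like this (a sketch; the socket path is the usual systemd user socket, but it can vary by distro):

    # Start the rootless, socket-activated Docker-compatible API:
    systemctl --user enable --now podman.socket

    # Point Docker Compose (and anything else that speaks Docker) at it:
    export DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock
    docker-compose up -d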

This isn’t the “Podman Way” though, and any real Podman user will be happy to tell you that. The Podman Way is either the aforementioned systemd-running-the-show approach or something called Quadlet or even a Kubernetes compatibility feature. Briefly, about those: Quadlet is “just” a tighter integration between systemd and Podman so that you can declaratively define Podman containers and volumes directly in a sort of systemd service file. (Well, multiple.) It’s like Podman and Docker Compose and systemd and Windows 3.1 INI files all had a bastard love child—and it’s about as pretty as it sounds. IMO, you’d do well to stick with Docker Compose.
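
For the morbidly curious, a minimal Quadlet setup looks roughly like this (a sketch; needs a Podman new enough to ship Quadlet, and the image and port are made up):

    # Drop a .container file where the Quadlet generator looks for it:
    mkdir -p ~/.config/containers/systemd
    cat > ~/.config/containers/systemd/whoami.container <<'EOF'
    [Container]
    Image=docker.io/traefik/whoami
    PublishPort=8080:80

    [Install]
    WantedBy=default.target
    EOF

    # systemd generates a whoami.service from that file:
    systemctl --user daemon-reload
    systemctl --user start whoami.service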

The Kubernetes compatibility feature lets you write Kubernetes-style configuration files and run them with Podman to start/manage your containers. It doesn’t actually use a Kubernetes cluster; it lets you pretend you’re running a big boy cluster because your command has the word “kube” in it, but in actuality you’re just running your lowly Podman containers instead. It also has the feel of being a dev toy intended for local development rather than actual production use.5 For instance, there’s no way to apply a change in-place without totally stopping and starting a container with two separate commands. What is this, 2003?
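
If you want to kick the tires anyway, it goes something like this (a sketch; the image is just an example, and older Podman versions spell the command podman play kube):

    # Write a bare-bones Kubernetes-style Pod definition:
    cat > whoami-pod.yaml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: whoami
    spec:
      containers:
      - name: whoami
        image: docker.io/traefik/whoami
        ports:
        - containerPort: 80
          hostPort: 8080
    EOF

    podman kube play whoami-pod.yaml
    # "Applying" a change means tearing it down and playing it again:
    podman kube down whoami-pod.yaml && podman kube play whoami-pod.yaml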

Lastly, there’s Podman Compose. It’s a third-party project (not produced by the Podman devs) that’s intended to support Docker Compose configuration files while working more “natively” with Podman. My brief experience using it (with all due respect to the devs) is that it’s total amateur hour and/or just not ready for prime time. Again, stick with Docker Compose, which works great with Podman.
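
(If you want to see for yourself, it's a pip install away; a sketch, and your mileage may vary:)

    # podman-compose is a separate Python project, not part of Podman itself:
    pip install --user podman-compose
    podman-compose up -d    # reads your existing docker-compose.yml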

Anyway, that’s all I’ve got! Use Podman if you want. Don’t use it if you don’t want. I’m not the boss of you. But you said you wanted content on Lemmy, and now you’ve got content on Lemmy. This is all your fault!

1 Where “full” is defined as: Not actually full.

2 Newer versions of Docker also have some rootless capabilities. But they’ve still got that stinky ol’ daemon.

2.5 It’s maybe not quite this simple in practice, because you’ll probably want to run multiple containers under the same Unix account unless you’re really OCD about security and/or have a hatred of the convenience of container networking.

3 You can run Podman as root and have many of the same properties as root Docker, but then what’s the point? One less daemon, I guess?

4 Where “solution” is defined as: Something that solves the problem while creating five new ones.

5 Spoiler: Red Hat’s whole positioning with Podman is like they see it as a way for buttoned-up corporate devs to run containers locally for development while their “production” is running K8s or whatever. Personally, I don’t care how they position it as long as Podman works well to run my self-hosting shit…

  • Geronimo Wenja

    One of the really nice side-effects of running rootless is that you get all the benefits of your containers running as an actual Unix user.

    For instance, you can set up WireGuard with ip routing rules to send all traffic from a given UID through the VPN.

    Using that, I set up one user as the single user for running all the stuff I want to have VPN’d for outgoing connections, like *arr services, with absolutely no extra work. I don’t need to configure a specific container, I don’t need to change a docker-compose etc.

    In rootful docker, I had to use a specific IP subnet to achieve the same, which was way more clunky.

    • @wittenOP

      Really good example of one way that Podman’s Unix Philosophy is actually helpful!

      • Geronimo Wenja

        Yeah sure.

        I’m going to assume you’re starting from the point of having a second linux user also set up to use rootless podman. That’s just following the same steps for setting up rootless podman as any other user, so there shouldn’t be too many problems there.
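
        For completeness, that setup is roughly this (a sketch; I'm calling the user vpnuser purely as an example - mine ends up with UID 1001, which is the UID used below):

        # Create the user; modern useradd normally allocates the
        # subordinate UID/GID ranges rootless podman needs:
        sudo useradd -m vpnuser
        grep vpnuser /etc/subuid /etc/subgid
        # Let its user services and containers run without a login session:
        sudo loginctl enable-linger vpnuser
        # Sanity check that rootless podman works as that user (if podman
        # complains about sessions/cgroups, log in as the user properly,
        # e.g. over SSH, rather than via sudo):
        sudo -iu vpnuser podman run --rm docker.io/library/alpine true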

        If you have wireguard set up and running already - i.e. with Mullvad VPN or your own VPN to a VPS - you should be able to run ip link to see a wireguard network interface. Mine is called wg. I don’t use wg-quick, which means I don’t have all my traffic routing through it by default. Instead, I use a systemd unit to bring up the WG interface and set up routing.

        I’ll also assume the UID you want to forward is 1001, because that’s what I’m using. I’ll also use enp3s0 as the default network link, because that’s what mine is, but if yours is eth0, you should use that. Finally, I’ll assume that 192.168.0.0/24 is your local network subnet - it’s useful to avoid routing local traffic through wireguard.

        #YOUR_STATIC_EXTERNAL_IP# should be whatever you get by calling curl ifconfig.me if you have a static IP - again, useful to avoid routing local traffic through wireguard. If you don’t have a static IP you can drop this line.

        [Unit]
        Description=Create wireguard interface
        After=network-online.target
        
        [Service]
        RemainAfterExit=yes
        ExecStart=/usr/bin/bash -c " \
                /usr/sbin/ip link add dev wg type wireguard || true; \
                /usr/bin/wg setconf wg /etc/wireguard/wg.conf || true; \
                /usr/bin/resolvectl dns wg #PREFERRED_DNS#; \
                /usr/sbin/ip -4 address add #WG_IPV4_ADDRESS#/32 dev wg || true; \
                /usr/sbin/ip -6 address add #WG_IPV6_ADDRESS#/128 dev wg || true; \
                /usr/sbin/ip link set mtu 1420 up dev wg || true; \
                /usr/sbin/ip rule add uidrange 1001-1001 table 200 || true; \
                /usr/sbin/ip route add #VPN_ENDPOINT# via #ROUTER_IP# dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add 192.168.0.0/24 via 192.168.0.1 dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add #YOUR_STATIC_EXTERNAL_IP#/32 via #ROUTER_IP# dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add default via #WG_IPV4_ADDRESS# dev wg table 200 || true; \
        "
        
        ExecStop=/usr/bin/bash -c " \
                /usr/sbin/ip rule del uidrange 1001-1001 table 200 || true; \
                /usr/sbin/ip route flush table 200 || true; \
                /usr/bin/wg set wg peer '#PEER_PUBLIC_KEY#' remove || true; \
                /usr/sbin/ip link del dev wg || true; \
        "
        
        [Install]
        WantedBy=multi-user.target
        

        There’s a bit to go through here, so I’ll take you through why it works. Most of it is just setting up WG to receive/send traffic. The bits that are relevant are:

                /usr/sbin/ip rule add uidrange 1001-1001 table 200 || true; \
                /usr/sbin/ip route add #VPN_ENDPOINT# via #ROUTER_IP# dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add 192.168.0.0/24 via 192.168.0.1 dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add #YOUR_STATIC_EXTERNAL_IP#/32 via #ROUTER_IP# dev enp3s0 table 200 || true; \
                /usr/sbin/ip route add default via #WG_IPV4_ADDRESS# dev wg table 200 || true; \
        

        ip rule add uidrange 1001-1001 table 200 adds a new rule where requests from UID 1001 go through table 200. A table is a separate set of routing rules that only gets consulted for traffic matched by a rule like this one.

        ip route add #VPN_ENDPOINT# ... ensures that traffic to the VPN endpoint itself - i.e. the encrypted wireguard packets - still leaves via the normal interface rather than being routed back into the tunnel. This is relevant for handshakes.

        ip route add 192.168.0.0/24 via 192.168.0.1 ... is just excluding local traffic, as is the ip route add #YOUR_STATIC_EXTERNAL_IP# ... rule.

        Finally, we add ip route add default via #WG_IPV4_ADDRESS# ... which routes all traffic that didn’t match any of the above rules (local traffic, wireguard) to go to the wireguard interface. From there, WG handles all the rest, and passes returning traffic back.

        There’s going to be some individual tweaking here, but the long and short of it is, UID 1001 will have all their external traffic routed through WG. Any internal traffic between docker containers in a docker-compose should already be handled by podman pods and never reach the routing rules. Any traffic aimed at other services in the network - i.e. sonarr calling sabnzbd or transmission - will happen with a relevant local IP of the machine it’s hosted on, and so will also be skipped. Localhost is already handled by existing ip route rules, so you shouldn’t have to worry about that either.
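
        If you want to sanity-check the whole thing once the unit is up, something like this should do it (assuming the forwarded user really is UID 1001):

        # The rule and the dedicated routing table should both exist:
        ip rule show | grep 200
        ip route show table 200
        # Traffic from UID 1001 should exit via the VPN's public IP,
        # while your own traffic still shows your normal IP:
        sudo -u '#1001' curl -s ifconfig.me
        curl -s ifconfig.me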

        Hopefully that helps - sorry if it’s a bit confusing. I learned to set up my own IP routing to avoid wg-quick so that I could have greater control over the traffic flow, so this is quite a lot of my learning that I’m attempting to distill into one place.

        • @beerd

          Thank you so much, I was just looking for a way to do this the other day!

        • MrWiggles

          Saving this for later, thank you very much for the detailed writeup. I might look into this for my main machine to partition the VPN tasks from the non-VPN tasks.

          • Geronimo Wenja

            You’re welcome. It has some really nice side-effects - e.g. if I want to quickly grab a file without it being from my normal IP, I can just SSH to the right user on my server and it just works - no configuration, no need to interrupt other traffic.