For the last two years, I’ve been treating compose files as individual runners for individual programs.

Then I brainstormed the concept of having one singular docker-compose file that defines every single running container on my system (well, every one that can use compose): each install starts at the same root directory, and volumes branch out from there.

Then I found out this is how most people use compose. One compose file, with volumes and directories branching out from wherever ./ is called.

THEN I FOUND OUT… that most people who discover this move their installations to Podman, because apps write their compose files against different versions, mixing those versions breaks the concept of having one singular docker-compose.yml file, and Podman doesn’t need a version for compose files.

Is there some meta for the best way to handle these apps collectively?

  • @[email protected]
    link
    fedilink
    English
    541 year ago

    I think compose is best used somewhere in between.

    I like to have separate compose files for all my service “stacks”. Sometimes that’s a frontend, backend, and database. Other times it’s just a single container.

    It’s all about how you want to organize things.

    • Matt The Horwood · 28 points · 1 year ago

      I do this, 1 compose file per application. That has all the things the application needs: volumes, networks, secrets.

      In single docker host land, each application even has its own folder with the compose file and any other artifacts in it.

    • @[email protected]
      link
      fedilink
      English
      31 year ago

      Yeah this post had me a little worried I’m doing something wrong haha. But I do it just like that. Compose file per stack.

  • chiisana · 37 points · 1 year ago

    Multiple compose files, each in their own directory for a stack of services. Running Lemmy? It goes in ~/compose_home/lemmy, with binds for the image resizer and database as folders inside that directory. Running a website? It goes in ~/compose_home/example.com, with its static files, api, and database binds all as folders inside that. Etc etc. Use a gateway reverse proxy (I prefer Traefik, but to each their own) and have each stack join the network to expose only what you need.
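
    A sketch of what one such stack directory’s compose file might look like (the service names, images, and the `gateway` network name here are illustrative, not from the comment):

    # ~/compose_home/example.com/compose.yml
    services:
      api:
        image: myorg/api:1.2            # placeholder image
        networks:
          - default
          - gateway                     # shared network the reverse proxy also joins
        labels:
          - traefik.enable=true
      db:
        image: postgres:16-alpine
        volumes:
          - ./database:/var/lib/postgresql/data   # bind lives inside the stack directory

    networks:
      gateway:
        external: true                  # created once, outside any single stack

    Only the services joined to the shared network are reachable by the proxy; the database stays on the stack’s private default network.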

    Backup is easy: snapshot the volume binds (stopping any service individually as needed). Moving a specific stack to another server is easy: just move the directory over to the new system (and update gateway info if required). Upgrading is easy: just upgrade the individual stack and off to the races.

    Pulling all stacks into a single compose for the system as a whole is nuts. You lose all the flexibility and gain… nothing?

      • @[email protected]
        link
        fedilink
        English
        11 year ago

        I prefer compose merge because my “downstream” services can propagate their depends/networks to the things that depend on them up the stream.

        There’s an env variable you set in .env, so it’s similar to include.

        The one thing I prefer about include is that each included directory can have its own .env file, which merges with the first-level .env. With merge, it seems you’re stuck with one .env file for all in-file substitutions.
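
        For reference, the include long syntax (Compose v2.20+) lets each included stack point at its own env file; a hypothetical sketch:

        # top-level compose.yml
        include:
          - path: ./lemmy/compose.yml
            env_file: ./lemmy/.env        # substitutions for this include only
          - path: ./website/compose.yml
            env_file: ./website/.env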

    • JustEnoughDucks · 3 points · 1 year ago

      That’s what I do. I always thought I was doing it “wrong” but it just made sense to me. I can also just up/down/etc… compose files to individually pull new images, test things, disable a service, and apply config updates without affecting another container at all.

      I even keep my docker config files in a separate directory so I can back up the docker composes in a second over the network.

      I started by using a single MariaDB instance with multiple databases, but now I see the same benefits from moving to one database container per compose file that needs it: it’s even more flexible, and I don’t need to start up MariaDB and Redis before all of my other containers.

      File permission problems? Down the compose file that needs it, fix it, and re-up it, without losing any uptime for the other services and without kludging together raw docker commands.

  • @[email protected]
    link
    fedilink
    English
    161 year ago

    I’ve always heard the opposite advice - don’t put all your containers in one compose file. If you have to update an image for one app, wouldn’t you have to restart the entirety of your apps?

    • @lue3099 · 3 points · edited · 1 year ago

      You can target a single container, or several, in a compose stack.

      docker compose -f /path/to/compose.yml restart NameOfServiceInCompose

    • @SheeEttin · 3 points · 1 year ago

      If by app you mean container, no. You pull the latest image and rerun docker compose. It will make only the necessary changes, in this case restarting the container to update.

  • @[email protected]
    link
    fedilink
    English
    111 year ago

    As others have said, I have a root docker directory and directories inside it for all my stacks, like Plex. Then I run this script, which loops through them all to update everything in one command.

    for n in plex-system bitwarden freshrss changedetection.io heimdall invidious paperless pihole transmission dashdot
    do
        cd "/docker/$n" || continue   # skip a stack if its directory is missing
        docker-compose pull
        docker-compose up -d
    done
    
    echo "Removing old docker images..."
    docker image prune -f
    
    • @pete_the_cat · 17 points · 1 year ago

      Or just use the Watchtower container to auto-update them 😉

      • @[email protected]
        link
        fedilink
        English
        71 year ago

        I don’t like the auto-update function. I also use a script similar to the one OP uses (with a .ignore file added). I like to be in control of when (or if) updates happen. I use Watchtower as a notification service.

        • @[email protected]
          link
          fedilink
          English
          11 year ago

          Exactly, when it updates, I want to initiate it to make sure everything goes as it should.

          • @pete_the_cat · 1 point · 1 year ago

            Nothing of mine is that important that I couldn’t recreate or roll back the container if it does happen to screw up.

        • chiisana · 1 point · 1 year ago

          I scream test myself… kidding aside, I try to pin to major versions where possible: postgres:16-alpine, for example, will generally not break between updates, and things should just chug along. It’s when indie devs tag nothing other than latest, or don’t adhere to semantic versioning best practices, that I keep Watchtower off and update manually once in a blue moon.
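
          In compose terms, that pinning strategy looks something like this (image names are illustrative):

          services:
            db:
              image: postgres:16-alpine    # major-pinned: picks up 16.x patches, never jumps to 17
            app:
              image: indiedev/app:latest   # unpinned: anything can change underneath; update manually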

  • @[email protected]
    link
    fedilink
    English
    91 year ago

    I moved from compose to using Ansible to deploy containers. The Ansible container config looks almost identical to a compose file but I can also create folders, config files, set permissions, etc.

    • @SheeEttin · 4 points · 1 year ago

      Can you give an example playbook?

      • @[email protected]
        link
        fedilink
        English
        61 year ago

        Sure. Below is an example playbook that is fairly similar to how I’m deploying most of my containers.

        This example creates a folder for samba data, creates a config file from a template and then runs the samba container. It even has a handler so that if I make changes to the config file template it will cycle the container for me after deploying the updated config file.

        I usually structure everything as an Ansible role, which just splits this sort of playbook up into a folder structure instead. ChatGPT did a great job of helping me figure out where to put files and generally sped up the process of creating tasks for common things like setting up a cronjob, installing a package, or copying files around.

        - name: Run samba
          hosts: servername
        
          vars:
            samba_data_directory: "/home/me/docker/samba"
        
          tasks:
          - name: Create samba data directory
            ansible.builtin.file:
              path: "{{ samba_data_directory }}"
              state: directory
              mode: '0755'
        
          - name: Create samba config from a jinja template file
            ansible.builtin.template:
              src: templates/smb.conf.j2
              dest: "{{ samba_data_directory }}/smb.conf"
              mode: '0644'
            notify: Restart samba container
        
          - name: Run samba container
            community.docker.docker_container:
              name: samba
              image: dperson/samba
              ports:
                - 445:445
              volumes:
                - "{{ samba_data_directory }}:/etc/samba/"
                - "/home/me/samba_share:/samba_share"
              env:
                TZ: "America/Chicago"
                UID: '1000'
                GID: '1000'
                USER: "me;mysambapassword"
                WORKGROUP: "my-samba-workergroup"
              restart_policy: unless-stopped
        
          handlers:
          - name: Restart samba container
            community.docker.docker_container:
              name: samba
              restart: true
        
        
  • poVoq · 8 points · 1 year ago

    The best way is to use Podman’s Systemd integration.

    • Dandroid · 4 points · 1 year ago

      This is what I use whenever I make my own services or am using a simple service with only one container. But I have yet to figure out how to convert a more complicated service like Lemmy that already uses docker-compose, so I just use podman-docker and emulate docker-compose with podman. But that doesn’t get me any of the benefits of systemd, and now my podman has a daemon, which defeats one of the main purposes of podman.

      • poVoq · 7 points · 1 year ago

        Just forget about podman-compose and use simple Quadlet container files with Systemd. That way it is not all in the same file, but Systemd handles all the inter-relations between the containers just fine.

        Alternatively Podman also supports kubernetes configuration files, which is probably closer to what you have in mind, but I never tried that myself as the above is much simpler and better integrated with existing Systemd service files.
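
        As a sketch, a minimal Quadlet unit (image and paths hypothetical) lives at ~/.config/containers/systemd/myapp.container:

        [Unit]
        Description=My app container

        [Container]
        Image=docker.io/library/nginx:1.25
        PublishPort=8080:80
        Volume=%h/myapp/data:/usr/share/nginx/html:Z

        [Install]
        WantedBy=default.target

        After a systemctl --user daemon-reload, Quadlet generates a regular myapp.service you can start and enable like any other systemd unit.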

        • @vegetaaaaaaa · 1 point · 1 year ago

          Quadlet

          Requires podman 4.4 though

          • poVoq · 1 point · 1 year ago

            No, from that version on, it is integrated in Podman, but it was available for earlier versions as a 3rd party extension as well.

            But if you are not yet on Podman 4.4 or later you should really upgrade soon, that version is quite old already.

            • @vegetaaaaaaa · 1 point · 1 year ago

              you should really upgrade soon

              Debian stable has podman 4.3 and 4.4 is not in stable-backports

      • krolden · 6 points · 1 year ago

        Podman with systemd works better if you just do your podman run command with all the variables and stuff and then run podman generate systemd.

        Podman compose feels like a band-aid for people coming from docker compose. If you run podman compose and then do podman generate systemd, it will just make a systemd unit that starts podman compose. In my experience, having all of the config in the actual systemd unit file makes your life easier in the long run. The fewer config files the better, I say.
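
        For example (container name and image hypothetical):

        podman run -d --name myapp -p 8080:80 docker.io/library/nginx:1.25
        podman generate systemd --new --files --name myapp
        # writes ./container-myapp.service; move it to ~/.config/systemd/user/ and enable it

        The --new flag makes the unit create and remove the container itself, so the unit file carries the full run configuration.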

        • poVoq · 5 points · 1 year ago

          It’s even simpler now that Quadlet is integrated in Podman 4.x or later.

          • krolden · 2 points · 1 year ago

            Oh yeah I remember reading some stuff about that but didn’t dig too deep. I’ll have to check it out again

      • exu · 2 points · 1 year ago

        You can use podman pods and generate the systemd file for the whole pod.

        • Dandroid · 1 point · 1 year ago

          But how do I convert the docker-compose file to a pod definition? If I have to do it manually, that’s a pass, because I don’t want to do it again if Lemmy updates and significantly changes its docker-compose file, which it did when 0.18.0 came out.

      • Max-P · 3 points · 1 year ago

        You’re probably thinking about systemd-nspawn. Technically yes they’re containers, but not the same flavour of them. It’s more like LXC than Docker: it runs init and starts a full distro, like a VM but as a container.

      • exu · 0 points · 1 year ago

        Nope, but it integrates very well with Podman.

  • @[email protected]B
    link
    fedilink
    English
    3
    edit-2
    1 year ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    LXC: Linux Containers
    NAT: Network Address Translation
    Plex: Brand of media server package
    VPS: Virtual Private Server (as opposed to shared hosting)

    [Thread #217 for this sub, first seen 15th Oct 2023, 20:15] [FAQ] [Full list] [Contact] [Source code]

  • TheHolm · 0 points · 1 year ago

    You can always add a Makefile to traverse the directories.
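
    Something like this hypothetical Makefile at the root of the stacks directory would do it (the stack names are placeholders, and recipe lines must be indented with tabs):

    STACKS := lemmy website pihole    # one entry per stack directory

    update:
    	for d in $(STACKS); do \
    	  docker compose -f $$d/docker-compose.yml pull && \
    	  docker compose -f $$d/docker-compose.yml up -d; \
    	done

    .PHONY: update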

  • Carlos Solís · -1 points · 1 year ago

    I’m currently using YunoHost behind CG-NAT with a WireGuard VPS bypass, but I plan on moving to a Dockerized setup soon because YNH is still using an outdated version of Debian. What do you recommend to keep my setup as similar to YNH as possible?