I’ve been using Portainer to manage my homelab stacks from a single dashboard, which is more convenient than the CLI, but I’m not very satisfied with it so I’ve been looking for alternatives.
Portainer often fails to deploy my stacks and is either silent about it or doesn’t give me much information to work with. The main convenience is that (when it works) it automatically pulls the updated Docker Compose files from my repo and deploys them without any action on my part.
Docker Swarm and Kubernetes seem to be the next ones in line. I have some experience with K8s, so I know it can be complex, but I hope it’s a complexity mostly paid upfront when setting everything up, rather than being complicated to maintain.
Do you have any experience with either one of these, or perhaps another way to orchestrate these services?
Docker Compose as is.
I used Portainer for like 2 years when I first learned Docker (I only used it to deploy compose files and monitor the containers), but it’s really shit once you know how things actually work.
I use Portainer, never had any issues with it, and I’m running around 40 stacks at any given time. There’s also Yacht, which is nice; it’s not quite as feature-rich as Portainer, but it’s super easy to use in comparison.
Currently it’s all in the CLI: I split my compose files by concern and use a bash alias with a wildcard to call them all.
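The commenter doesn’t show the alias itself. A minimal sketch of what such a wildcard helper might look like, written as a bash function for clarity (the `~/stacks` layout and `compose.yaml` naming are my assumptions):

```shell
# Hypothetical helper standing in for the wildcard alias: bring up
# every compose file matching a glob under ~/stacks.
stacks_up() {
  local f
  for f in "$HOME"/stacks/*/compose.yaml; do
    docker compose -f "$f" up -d
  done
}
```

An equivalent one-liner alias would be `alias stacks-up='for f in ~/stacks/*/compose.yaml; do docker compose -f "$f" up -d; done'`.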
But now that I’m adding an RPi to the stack for some monitoring and a few other light things, I’m also thinking of moving to Kube. But as you say, it may be tough ^^
I’ve used both Portainer and Yacht. They’re decent ways to manage Docker stacks/apps; it kinda depends on what you want app-wise. I’m also trying out a project named Cosmos (a simplified Portainer-like app with a focus on ease of use) on a friend’s server.
This is definitely an over-engineered setup…
- I store my Docker Compose files in an internal-only git repo (hosted on Gitea).
- Drone is my CI/CD system.
- I use Renovatebot to look for updates to container tags (never pull `latest`).
My workflow is this:
- Renovatebot runs once a night (at midnight) via a Drone workflow in a Docker container (I’ve written about this here). Renovate scans a specific list of repos it is allowed to access. If a new image tag is found, it opens a PR in Gitea.
- I manually log in to Gitea and approve the PR.
- The PR approval (merging to `master`) kicks off a Drone workflow that does the following:
  - Runs an Alpine Linux container
  - SSHes from the Alpine Linux container into my Docker host (using an SSH key stored in Drone as a secret)
  - Runs a script (on the Docker host) that basically runs `git pull`, then for each Docker Compose file in a directory, runs `docker compose -f "$D" pull` and `docker compose -f "$D" up -d`
  - If there is a failure, Drone emails me using the `drillster/drone-email` plugin
I’ve written about step 3 here.
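The step-3 script itself isn’t shown in the post. A rough sketch of what it might look like, written as a function (the repo path and the `*/compose.yaml` layout are my assumptions; the post only specifies the `git pull` and the per-file `pull`/`up -d` loop):

```shell
# Hypothetical reconstruction of the step-3 update script: refresh the
# repo, then pull and restart every compose file found in it.
redeploy_stacks() {
  local repo_dir="$1"            # e.g. /opt/stacks -- path is an assumption
  cd "$repo_dir" || return 1
  git pull
  local D
  for D in */compose.yaml; do
    docker compose -f "$D" pull
    docker compose -f "$D" up -d
  done
}
```

Keeping the loop per-file means one broken stack doesn’t necessarily block updates to the others, which fits the “Drone emails me on failure” step.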
This means I never manually update Docker Compose files, I let Renovate manage everything. I approve PRs, then I walk away and let the scripts run.
I also run a single-node K3s cluster that has its Flux config hosted on GitHub. Again, Renovate opens PRs, and Flux watches for changes to `master`, which then redeploys applications.