Some folks on the internet were interested in how I had managed to ditch Docker for local development. This is a slightly overdue write up on how I typically do things now with Nix, Overmind and Just.
Hi!
First I’d like to clarify that I’m not “anti-container/Docker”. 😅
There is a lot of discussion on this article (with my comments!) going on over at Tildes. I don’t wanna copy-paste everything from there, but I’ll share the first main response I gave to someone who had very similar feedback to kick-start some discussion on those points here as well:
Some high-level points on the “why”:

- **Reproducibility:** Docker builds are not reproducible, and especially in a company with more than a handful of developers, it’s nice not to have to worry about a `docker build` command in the on-boarding docs failing inexplicably (from the POV of the regular joe developer) from one day to the next
- **Cost:** Docker licenses for most companies now cost $9/user/month (minimum of 5 seats required); this is very steep for something that doesn’t guarantee reproducibility and has poor performance to boot (see below)
- **Performance:** Docker performance on macOS (and Windows), especially storage mount performance, remains poor; this is even more acutely felt when working with languages like Node where the dependencies are file-count heavy. Sure, you could just issue everyone Linux laptops, but these days hiring is hard enough without shooting yourself in the foot by not providing a recent MBP to new devs by default
I think it’s also worth drawing a line between containers as a local development tool and containers as a deployment artifact, as the above points don’t really apply to the latter.
If your dev documentation includes your devs running `docker build`, you’re doing docker wrong. The whole point is that you can build a working container image and then ship it to a registry (including private registries) so that your other developers/users/etc. don’t have to build them and can just run the existing image.

Then for development, you simply use a bind mount to ensure your local copy of the code is available in the container instead of the copy the container was built with.

That doesn’t solve the performance issues on Windows and Mac, but it does prevent the “my environment is broke” issues that docker is designed to solve.
Agreed, we still do this in the areas where we use Docker at day job.
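For the unfamiliar, that pattern looks roughly like this; a minimal sketch, assuming a hypothetical registry, image name and app path:

```sh
# Pull the image that CI already built and pushed to the registry
# (registry and image names here are hypothetical).
docker pull registry.example.com/acme/webapp:latest

# Bind-mount the local checkout over the code baked into the image,
# so edits on the host are visible inside the container immediately.
docker run --rm -it \
  -v "$PWD":/app \
  -p 3000:3000 \
  registry.example.com/acme/webapp:latest
```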
I think the mileage with this approach can vary depending on the languages in use and the velocity of feature iteration (i.e. if the company is still tweaking product-market fit, pivoting to a new vertical, etc.).
I’ve lost count of the number of times where a team decides they need to `npm install` something with a heavy `node-gyp` step to build native modules which require yet another obscure system dependency that is not in the base layer. 😅
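To make that concrete, here’s a hypothetical sketch of the base-layer patch this tends to force (the extra library is illustrative; `node-gyp` itself needs Python and a C/C++ toolchain on Debian-based images):

```dockerfile
FROM node:20-bookworm-slim

# node-gyp needs a toolchain to compile native modules, plus whatever
# extra system library the new dependency happens to link against.
RUN apt-get update && apt-get install -y --no-install-recommends \
      python3 make g++ \
      libvips-dev \
    && rm -rf /var/lib/apt/lists/*
```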
Are you talking about Docker Desktop and/or Docker Hub? Because plain old docker is free and open source, unless I missed something big. Personally I’ve never had much use for Docker Desktop, and I use GitLab so I have no reason to use Docker Hub.
I believe this is the Docker Desktop license pricing.
On an individual scale and even some smaller startup scales, things are a little bit different (you qualify for the free tier, everyone you work with is able to debug off-the-beaten-path Docker errors, knowledge about fixes is quick and easy to disseminate, etc.), but the context of this article and the thread on Mastodon that spawned it was a “unicorn” company with an engineering org comprised of hundreds of developers.
My point is that Docker Desktop is entirely optional. On Linux you can run Docker Engine natively, on Windows you can run it in WSL, and on macOS you can run it in a VM with Docker Engine, or via something like hyperkit and minikube. And Docker Engine (and the CLI) is FOSS.
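For example, one rough sketch of the minikube route on macOS (the driver choice varies by machine; this is a sketch rather than a recommendation):

```sh
# Start a local VM that runs Docker Engine (hyperkit driver on macOS).
minikube start --driver=hyperkit

# Point the host's docker CLI at the engine inside the VM.
eval "$(minikube docker-env)"

# Plain docker commands now talk to the VM's engine; no Docker Desktop.
docker ps
```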
I understood your point, and while there are situations where it can be optional, in a context and scale of hundreds of developers who mostly don’t have any real `docker` knowledge (let alone enough to set up and maintain alternatives to Docker Desktop), and who work almost exclusively on macOS, the only practical option becomes to pay the licensing fees to enable the path of least resistance.

We are over 1000 developers and use `docker ce` just fine. We use a self-hosted repository for our images. IT is configuring new computers to use this internal docker repository by default, so new employees don’t even have to know about it to do their first `docker build`.

We all use Linux on our workstations and laptops. That might make it easier.
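As a sketch of what such a default might look like, IT could point the daemon at the internal mirror in `/etc/docker/daemon.json` (the URL is hypothetical):

```json
{
  "registry-mirrors": ["https://registry.internal.example.com"]
}
```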
You are living my dream!
I think this is the key piece; the experience of Docker on Linux (including WSL, if it’s not hooking into Docker Desktop on Windows) and on macOS is just so wildly different when it comes to performance, reliability and stability.
OP comes off as a bit uninformed. E.g. I use Docker Engine and Docker Compose inside WSL2 on Windows and performance is fine; I then use IntelliJ to manage images/containers, and the services tab handles the basics. If I need to do anything very involved I use the CLI.

Docker is fine; the Docker Desktop panic really only revealed who never took the time to learn how to use Docker and what the alternative UIs are.
What makes you say that?
My team relies on Docker because it is reproducible…
Highly recommended viewing if you’d like to learn more about the limits of reproducibility in the Docker ecosystem.
You might be interested in this article that compares Nix and Docker. It explains why Docker builds are not considered reproducible, and why Nix builds are reproducible a lot of the time.
Containerization has other advantages though (security) and you can actually use nix’s reproducible builds in combination with (docker) containers.
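nixpkgs ships `dockerTools` for exactly this combination; here is a minimal sketch that builds an image whose contents come from Nix:

```nix
{ pkgs ? import <nixpkgs> { } }:

# Build an OCI-compatible image from Nix packages; the image is as
# reproducible as the Nix closure it is built from.
pkgs.dockerTools.buildLayeredImage {
  name = "hello-nix";
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "/bin/hello" ];
}
```

Built with `nix-build` and loaded with `docker load < result`, it runs like any other container image.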
That seems like an argument for maintaining a frozen repo of packages, not against containers. You can only have a truly fully-reproducible build environment if you set up your toolchain to keep copies of every piece of external software so that you can do hermetic builds.

I think this is a misguided way to work around proper toolchain setup. Nix is pretty cool though.
I am not arguing against containers; I am arguing that Nix is more reproducible. Containers can be used with Nix and are useful in other ways.
This is essentially what Nix does. In addition, it verifies that the packages are identical to the packages specified in your flake.nix file.
This is essentially what Nix does, except Nix verifies the external software is the same with checksums. It also does hermetic builds.
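A tiny sketch of what that checksum pinning looks like in practice (the URL and hash are placeholders):

```nix
{ pkgs ? import <nixpkgs> { } }:

# Nix refuses to build unless the downloaded file matches the pinned
# hash, so every machine verifies the same bytes for this input.
pkgs.fetchurl {
  url = "https://example.com/releases/tool-1.2.3.tar.gz";
  hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
}
```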
Nix is indeed cool. I just see it as less practical than maintaining a toolchain for devs to use. Seems like reinventing the wheel instead of airing up the tires. I could well be absolutely wrong there; my experience is mainly enterprise software, and not every process or tool there is used because it is the best one.
@nickwitha_k @uthredii I’d like to think a better analogy would be that nix is like using a 3D model of a wheel instead of a compass and a straightedge to make wheels hehe 🙃
I quite like the sound of Nix every time I touch on it, but I haven’t really dug in yet. You’re making me really want to, though.
There are definitely some things preventing Nix adoption. What are the reasons you see it as less practical than the alternatives?
What are alternative ways of maintaining a toolchain that achieves the same thing?
I see it as less practical mainly due to the extant tooling and age/maturity of the project.
The ways that I’m most familiar with are the use of software like Artifactory; basically a multi-repo. Using such a tool, any package or artifact can be readily retained for future use. Then, for builds, one only needs to ensure that it is used as the package source, regardless of type (PyPI, Docker image, binary, RPM, etc.).

Alternatively, one can use individual repos for each relevant package type, but that’s a bit more overhead to manage.
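For instance, pointing npm at such a repository is a one-liner (the Artifactory URL is hypothetical):

```sh
# Route all npm installs through the internal virtual repo instead of
# the public registry.
npm config set registry https://artifactory.example.com/api/npm/npm-virtual/
```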
I’ll certainly give this a read!
Are you saying that nix will cache all the dependencies within itself/its “container,” or whatever its container replacement would be called?
Appreciate the in-depth response! I’ve always been interested in Nix but I’m scared of change lol. And I’m a single systems administrator on a team of mostly non-technicals so large changes like that are … less necessary. Plus you know, mostly dealing with enterprise software on windows unfortunately. One of these days.
NixOS on WSL2 is actually my development environment of choice these days! (With my tiling window manager komorebi, of course! 😀)
I’ve been looking for a tiling WM for Windows but haven’t come across komorebi; thanks for mentioning it!
I remember when I first got a work MacBook and was confused about why I had to install some ‘Docker Desktop’ crap.
I also learnt how much Docker images care about the silicon they’re built on… Fucking M1 chip can be a pain…