I looked for specific examples of this and didn’t find answers; they’re buried in general discussions about why compiling may be better than pre-built. The reasons I found were control of flags and features, and optimizations for specific chips (like Intel AVX or ARM NEON), but to what degree do those apply today?
The only software I can tell benefits greatly from building from source is ffmpeg, since there are many non-free encoders, decoders, and upscalers that can be bundled, and performance varies a lot between devices depending on which of them the CPU or GPU supports. For instance, Nvidia hardware encoders typically produce higher-quality video at similar file sizes than the encoders from Intel, AMD, or Apple. Software encoders like x265 have optimizations for AVX and NEON (SIMD extensions for CPUs).
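To make that concrete, here’s a rough sketch of the kind of configure line that motivates a source build of ffmpeg; the exact flags depend on which encoders you want, and it assumes the corresponding development libraries are already installed:

```
# pick exactly the encoders you want; libfdk_aac is the classic
# "non-free, so absent from most prebuilt binaries" example
./configure --enable-gpl --enable-nonfree \
            --enable-libx264 --enable-libx265 \
            --enable-libfdk-aac
make -j"$(nproc)"
```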
The performance boost provided by compiling for your specific CPU is real but not terribly large (<10% in the last tests I saw some years ago). Probably not noticeable on common arches unless you’re running CPU-intensive software frequently.
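For the curious, this is the sort of thing people mean (a sketch of Gentoo’s /etc/portage/make.conf; the same idea applies to plain CFLAGS anywhere):

```
# -march=native tunes the build for the CPU you're compiling on,
# at the cost of the binaries not being portable to other machines
COMMON_FLAGS="-O2 -march=native -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
```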
Feature selection has some knock-on effects. Tailored features mean that you don’t have to install masses of libraries for features you don’t want, which come with their own bugs and security issues. The number of vulnerabilities added and the amount of disk space chewed up usually isn’t large for any one library, but once you’re talking about a hundred or more, it does add up.
Occasionally, feature selection prevents mutually contradictory features from fighting each other—for instance, a custom kernel that doesn’t include the nouveau drivers isn’t going to have nouveau fighting the proprietary nvidia drivers for command of the system’s video card, as happened to an acquaintance of mine who was running Ubuntu (I ended up coaching her through blacklisting nouveau). These cases are very rare, however.
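For anyone hitting the same fight, the blacklisting boils down to something like this Ubuntu-flavoured sketch:

```
cat <<'EOF' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
EOF
# rebuild the initramfs so nouveau stays out of early boot
sudo update-initramfs -u
```

A custom kernel built without nouveau just sidesteps the whole issue.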
Disabling features may allow software to run on rare or very new architectures where some libraries aren’t available, or aren’t available yet. This is more interesting for up-and-coming arches like riscv than dying ones like mips, but it holds for both.
One specific pro-compile case I can think of that involves neither features nor optimization is that of aseprite, a pixel graphics program. The last time I checked, it had a rather strange licensing setup that made compiling it yourself the best choice for obtaining it legally.
(Gentoo user, so I build everything myself. Except rust. That one isn’t worth the effort.)
emacs (the master branch): it’s surprisingly less buggy and way faster than the prebuilt, stable version. It also doesn’t take very long to compile new changes if you keep building from a local git clone.
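The usual dance for a master-branch build looks roughly like this (a sketch, assuming the build dependencies are already installed):

```
git clone https://git.savannah.gnu.org/git/emacs.git
cd emacs
./autogen.sh
./configure            # add options like --with-native-compilation if you want them
make -j"$(nproc)"
# after a `git pull`, incremental rebuilds only recompile what changed
```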
lol, saw title, came to say ffmpeg, read body, it’s your prime example!
I can’t remember which flag or feature it is that I’ve more than once found myself having to build from source to enable, but there is one!
I read that compiled software can be more optimized for our devices, but the devices of mine that could benefit the most from compiled software are the ones it isn’t viable to compile software on.
If you’re seriously wanting to compile optimized software for those devices, you would want to investigate “cross compiling”
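A minimal sketch of what that looks like, assuming the aarch64-linux-gnu cross toolchain is installed on the build host (hello.c is just a placeholder):

```
# compile on a fast machine, run on the slow ARM device
aarch64-linux-gnu-gcc -O2 -o hello hello.c
file hello    # should report an AArch64 ELF executable you can copy to the target
```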
I’m not really serious, but I’d consider making a test or two, thanks.
Obviously only for important cases, like compiling custom insults into sudo. :)
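For the record, that really is a build-time option. A sketch from sudo’s source tree, plus the sudoers line that turns it on:

```
./configure --with-insults
make && sudo make install
# then enable in /etc/sudoers (edit with visudo):
# Defaults insults
```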
In order to use a bit of Windows-only software for controlling my CNC machine, I have to use a Wine patch that is only available in source form. That means building Wine from source if I want to use my CNC.
Nginx has a number of compile-time optional features and they aren’t all enabled in the pre-built packages. For example, the ability to echo back HTTP requests for debugging.
I used to have to custom compile nginx to get HTTP/3 and brotli working (significant speed benefits), but now it’s possible to get those in packages on my OS. This makes maintenance far easier or even automatic for me, which is great from a security standpoint.
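For reference, that used to mean a configure line along these lines (a sketch; the module path is a placeholder for wherever you cloned ngx_brotli, and other third-party modules like the echo module get added the same way):

```
./configure --with-http_v3_module \
            --add-module=../ngx_brotli
make -j"$(nproc)"
```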
Anecdotally: the nightly Mozilla builds were a godsend when I couldn’t afford decent hardware.
Also anecdotally and professionally: when you have a client that insists on using source, like most software companies do nowadays, you can use that source along with something like a hash to keep them honest and prevent them from leaving you holding the bag when shtf. (Ask me how I know this works. Lol)
Anecdotally: the nightly Mozilla builds were a godsend when I couldn’t afford decent hardware.
I don’t know much about them, do you happen to know why the nightly builds were better? Did the new features fix a problem?
I wasn’t using the main build; I was using a minimalist build on my ancient laptop, and I struggle to remember what it was called now.
FreeBSD.
I needed to set up a whole new small server for a company I had just joined. (Their old beast running MS Exchange was fat and dying, with crashes and manual interventions every day, etc.) I had an expert do the initial install, and he wanted to compile it all in place, right there on that machine. OK, well, why not.
The FreeBSD server turned out to be a quiet workhorse that never complained about anything, and only about 3 or 4 years later we had the first occasion when a reboot was needed.
Yeah, the FreeBSD OS is absolutely awesome.
Have nothing more to add… Well, maybe: I wanted to set up my work server with FreeBSD and I really had trouble getting Linux VMs up and running.
Still don’t know what I did wrong, but I went the other way round and set up a Linux server with a FreeBSD VM as “gatekeeper” (my WireGuard entry point), so it secures my other stuff behind it. But I really liked how nice it is to work with. Sadly I was too stupid to set it up right, with the multiple services/containers/VMs I needed…
For me the biggest benefit is the ease of applying patches. For example in Nix I can easily take a patch that is either unreleased, or that I wrote myself, and apply it to my systems immediately. I don’t need to wait for it to be released upstream then packaged in my distro. This allows me to fix problems and get new features quickly without needing to mess with my system in any other way (no packages in other directories that need to be cleaned up, no extra steps after updates to remember, no cases where some packages are using different versions and no breaking due to library ABI breaks).
Another benefit that you are pointing at is changing build flags. Often times I want to enable an optional feature that my distro doesn’t enable by default.
Lastly building packages with different micro-architecture optimizations can be beneficial. I don’t do this often but occasionally if I want to run some compute-heavy work it can be nice to get a small performance boost.
OP asked for specific examples, do you have any you think are worth emphasizing?
I love this about Nix. Had a case this year where I’d hit a bug in the upstream, I fixed it and submitted a PR but then could reference that PR directly for the patch file until a new release finally made it out.
Kinda not really answering your question, but Arch’s AUR often needs to compile something from source, so the benefit for me is just having the absolute latest version running; if there’s a bug I can report it and help the package get better.
And in 5 years time it might be in Debian stable… /s
- I build software that I changed or patched
- When the bat version in the repos was broken, I just installed it with cargo, which compiles the latest version
- You can get a compiled version with a ‘-git’ package from the AUR if you need the latest features not yet in a stable release (see the sketch after this list)
- Some pieces of software I use I made myself so they are compiled by me
- Maybe you want to install some software that is not available precompiled
- The XZ backdoor wasn’t in the git repository, only in the generated release tarballs, so building it from git yourself avoided it: https://www.openwall.com/lists/oss-security/2024/03/29/4
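The AUR ‘-git’ case from the list above looks roughly like this (the package name is a placeholder):

```
git clone https://aur.archlinux.org/somepackage-git.git
cd somepackage-git
makepkg -si   # builds from the project's latest git source and installs the resulting package
```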
The question is not really whether the software will be “better.” In most cases, you only compile from source if you have a specific situation where you need, or think you might benefit from, some specific non-default build option. Or if you don’t trust the provider of pre-built releases for whatever reason.
didn’t find answers [:] they’re buried in general discussions about why compiling may be better than pre-built. The reasons I found were control of flags and features, and optimizations for specific chips (like Intel AVX or ARM Neon), but to what degree do those apply today?
You won’t build and install directly from source in any proper enterprise environment, simply because validation breaks down and (provable) consistency goes with it; and that takes out reliability.
Even accounting for the gains when you’re tuning stuff, or when it’s a home build, or when it’s a kernel build and you’re removing or adding drivers or tunable defaults, ultimately you will be building a package: a portable artefact that can be submitted for testing or pulled out of backups for an easy re-install. Kernel builds especially take a long time, and even when you’re using makefiles for much of it, you’re STILL going to be building a package, if only so you have the process encoded and repeatable, and so you don’t have to re-make it if it all works (more of an issue back when building a kernel package took 25 hours, but you get the idea).
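As a rough sketch of that workflow on a Debian-family host (assuming the kernel build dependencies are installed), you build installable packages rather than running make install on the box:

```
# inside the kernel source tree
make olddefconfig                # start from the existing kernel config
make -j"$(nproc)" bindeb-pkg     # produces linux-image-*.deb / linux-headers-*.deb in the parent dir
# the .debs are the portable artefact: test them, archive them, re-install them at will
```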
So. In short, if someone’s telling you to compile into production from source, it’s still a security risk and it’s also inefficient past the N=1 stage. Irresponsible for TWO reasons, then.
Edit: I coordinated with Support while I was doing Security work in ~2005. You wanna know how to piss off your support worker and fast-track a ticket to ‘no repro’ death? “I compiled it on the machine from source…” And that goes for paid support or GitLab project volunteer support.