There was a time when this debate was bigger. It seems the world has shifted towards architectures and tooling that do not allow dynamic linking or make it harder. This compromise makes it easier for the maintainers of the tools / languages, but does take away choice from the user / developer. But maybe that’s not important? What are your thoughts?

  • LeberechtReinhold · 22 · 10 months ago

    I have yet to find a memory-hungry program whose appetite is caused by its dependencies instead of its data. And frankly, the disk space of all its libraries is minuscule compared to graphical assets.

    You know what really aggravates the issue? When the program doesn’t work because of a dependency. And this happens often, across all OSes; threads about it are a dime a dozen in forums. “Package managers should just fix all these issues.” Until they don’t: wrong versions get uploaded, things fail to compile, environment problems, etc. etc.

    So to me, the efficiency argument for dynamic linking doesn’t really cut it. A bloated program is more efficient than a program that doesn’t work.

    This is not to say that dynamic linking shouldn’t be used. For programs doing any kind of elevation or administration, it’s almost always better from a security perspective. But for general user programs? Static all the way.

    • @thirdBreakfast · 3 · 10 months ago

      I read an interesting post by Ben Hoyt this morning called The small web is a beautiful thing; it touches a lot on this idea. (Warning: long read.)

      I also always feel a bit uncomfortable having any dependencies at all (left-pad, never forget), but runtime ones? Those I really like to avoid.

      I have Clipper-compiled executables written for clients 25 years ago that I can still run in a DOS VM in an emergency. They used a couple of libraries written in C for fast indexing etc., but all statically linked.

      But the Visual Basic/Access apps from 20 years ago with their dependencies on a large number of DLLs? Creating the environment would be an overwhelming challenge.

      • LeberechtReinhold · 6 · 10 months ago

        I kind of agree with your points, but I think there has to be a distinction between kinds of libs. Most deps should be static, IMHO. But for something like OpenSSL I can understand going with dynamic linking, especially in a security-critical program.

        But for “string parsing library #124” or random “gui lib #35”… Yeah, go with static.

        • @thirdBreakfast · 1 · 10 months ago

          Great point. Sometimes the benefit of an external dependency being changeable is a great feature.

    • @uis · 1 · 10 months ago

      But for general user programs? Static all the way.

      Does it include browsers?

  • Jamie · 17 · 10 months ago

    The user never had much choice to begin with. If I write a program using version 1.2.3 of a library, then my application is going to need version 1.2.3 installed. But how the user gets 1.2.3 depends on their system, and in some cases they might be entirely unable to, unless they grab a flatpak or appimage. I suppose it limits the ability to write shims over those libraries if you want to customize something at that level, but that’s a niche use-case that most people aren’t going to need.

    With a statically linked application, you can largely just ship your application and it will just work. You don’t need to fuss about the user installing all the dependencies at the system level, and your application is prone to fewer user problems as a result.

    • @[email protected] · 3 · 10 months ago

      Only if the library is completely shitty and breaks between minor versions.

      If the library is that bad, it’s a strong sign you should avoid it entirely since it can’t be relied on to do its job.

    • @uis · 1 · 10 months ago

      Not to disappoint you, but when I installed an HL1 build from 2007, I had a lot of library versions that did not exist back in 2007, and it works just fine.

  • ono · 15 · edited · 10 months ago

    Shared libraries save RAM.

    Dynamic linking allows working around problematic libraries, or even adding functionality, if the app developer can’t or won’t.

    Static linking makes sense sometimes, but not all the time.

    • @[email protected] · 3 · 10 months ago

      Shared libraries save RAM.

      Citation needed :) I was surprised, but I read (sorry, I can’t find the source again) that in most cases a dynamically linked library is loaded by only one process, and rarely by more than a few. That makes the RAM gain much less obvious. In addition, static linking allows inlining, which in turn allows aggressive constant propagation and dead code elimination, on top of LTO. All of this decreases the binary size, sometimes in non-negligible ways.

      • ono · 2 · 10 months ago

        I was surprised, but I read (sorry, I can’t find the source again) that in most cases a dynamically linked library is loaded by only one process, and rarely by more than a few.

        That is easily disproved on my system by cat /proc/*/maps .
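
        As a rough self-check (a sketch assuming a Linux system with a readable /proc), you can count how many running processes currently map a shared libc:

        ```shell
        # Count processes whose address space includes a shared libc mapping.
        # Each matching maps file is one process sharing the same library pages.
        grep -l 'libc' /proc/[0-9]*/maps 2>/dev/null | wc -l
        ```

        On a typical desktop this prints a number well above one, which is the scenario where the shared code pages actually pay off.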

          • ono · 4 · edited · 10 months ago

            Ah, yes, I think I read Drew’s post a few years ago. The message I take away from it is not that dynamic linking is without benefits, but merely that static linking isn’t the end of the world (on systems like his).

    • CyclohexaneOP · 1 · 10 months ago

      if the app developer can’t or won’t

      Does this apply if the app is open source?

    • @uis · 1 · 10 months ago

      Not exactly, shared libraries save cache.

  • @colonial · 14 · 10 months ago

    Personally, I prefer static linking. There’s just something appealing about an all-in-one binary.

    It’s also important to note that applications are rarely 100% one or the other. Full static linking is really only possible in the Linux (and BSD?) worlds thanks to syscall stability - on macOS and Windows, dynamically linking the local libc is the only good way to talk to the kernel.

    (There have been some attempts made to avoid this. Most famously, Go attempted to bypass linking libc on macOS in favor of raw syscalls… only to discover that when the kernel devs say “unstable,” they mean it.)

  • @[email protected] · 9 · 10 months ago

    Disk is cheap, and it’s easier to test exact versions of dependencies. As a user, I’d rather not have all my non-OS stuff mixed up.

    • CyclohexaneOP · 10 · 10 months ago

      From my understanding, unless a shared library is used by only one process at a time, static linking can increase memory usage by duplicating that library’s code segment in every process that uses it. So it is not only about disk space.

      But I suppose for an increasing number of modern applications, data and heap are much larger than that (though I am not particularly a fan …)

    • @uis · 1 · 10 months ago

      Look at how bloated Android applications are.

  • @[email protected] · 4 · 10 months ago

    Some languages don’t even support linking at all. Interpreted languages often dispatch everything by name without any relocations, which is obviously horrible. And some compiled languages only support translating the whole program (or at least the whole binary; looking at you, Rust!) at once. Do note that “static linking” has shades of meaning: it also applies to “link multiple objects into a binary”, but that is often excluded from the discussion in favor of just “use a .a instead of a .so”.

    Dynamic linking supports a much faster development cycle than static linking (which is itself faster than whole-binary-at-once translation), at the cost of slightly slower runtime (but the location of that slowness can be controlled, if you actually care, and can easily be kept out of hot paths). It is of particularly high value for security updates, but since we all know most developers don’t care about security, I’ll talk about annoyance instead. Some realistic numbers: dynamic linking might mean “rebuild in 0.3 seconds” vs. static linking’s “rebuild in 3 seconds” vs. no linking’s “rebuild in 30 seconds”.

    Dynamic linking is generally more reliable against long-term system changes. For example, it is impossible to run old statically linked builds of bash 3.2 on a modern distro (something about an incompatible locale format?), whereas the dynamically linked versions work just fine (assuming the libraries are installed, which is a reasonable assumption). Keep in mind that “just run everything in a container” isn’t a solution, because somebody has to maintain the distro inside the container.

    Unfortunately, a lot of programmers lack basic competence and therefore have trouble setting up dynamic linking. If you really need frobbing, there’s nothing wrong with RPATH as long as you’re not setuid or similar (and even if you are, absolute root-owned paths are safe; a reasonable restriction, since setuid will require more than just extracting a tarball anyway).
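
    A minimal sketch of the RPATH setup being described (file names and the 42 return value are invented for the example): `$ORIGIN` expands at load time to the directory containing the binary, so the whole tree can be extracted anywhere and the loader still finds the bundled .so without LD_LIBRARY_PATH or ldconfig.

    ```shell
    set -e
    mkdir -p rpath-demo/libs && cd rpath-demo
    # A trivial shared library...
    printf 'int foo(void) { return 42; }\n' > foo.c
    cc -shared -fPIC foo.c -o libs/libfoo.so
    # ...and an app linked with an $ORIGIN-relative RPATH.
    # Single quotes keep the shell from expanding $ORIGIN itself.
    printf 'int foo(void);\nint main(void){return foo()==42?0:1;}\n' > main.c
    cc main.c -o myapp -L libs -lfoo -Wl,-rpath,'$ORIGIN/libs'
    ./myapp && echo OK
    ```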

    Even if you do use static linking, you should NEVER statically link to libc, and probably not to libstdc++ either. There are just too many things that can go wrong when you give up on the notion of a “single source of truth”. If you actually read the man pages for the tools you’re using, this is very easy to get right, but a lack of such basic diligence is common among proponents of static linking.

    Again, keep in mind that “just run everything in a container” isn’t a solution because somebody has to maintain the distro inside the container.

    The big question these days should not be “static or dynamic linking” but “dynamic linking with or without semantic interposition?” Apple’s broken “two level namespaces” is closely related but also prevents symbol migration, and is really aimed at people who forgot to use -fvisibility=hidden.

      • @[email protected] · 0 · 10 months ago

        The problem is that GLIBC is the only serious attempt at a libc on Linux. The only competitor even trying is MUSL, and until early $CURRENTYEAR it still had world-breaking, standards-violating bugs marked WONTFIX. While I can no longer name similar catastrophes, that history gives me little confidence.

        There are some lovely technical things in MUSL, but a GLIBC alternative it really is not.

          • @[email protected] · 0 · edited · 10 months ago

            DNS-over-TCP (which is required by the standard for all replies over 512 bytes) was unsupported prior to MUSL 1.2.4, released in May 2023. Work had begun in 2022 so I guess it wasn’t EWONTFIX at that point.

            Here’s a link showing the MUSL author leaning toward still rejecting the standard-mandated feature as recently as 2020: https://www.openwall.com/lists/musl/2020/04/17/7 (“not to do fallback”)

            Complaints that the differences are just about “bug-for-bug compatibility” are highly misguided when what’s missing is useful features, let alone standard-mandated ones (e.g. the whole complex-number library is still missing!).

    • @colonial · 1 · 10 months ago

      NEVER statically link to libc, and probably not to libstdc++ either.

      This is really only true for glibc (because its design doesn’t play nice with static linking) and whatever macOS/Windows have (no stable kernel interface, which Go famously found out the hard way.)

      Granted, most of the time those are what you’re using, but there are plenty of cases where statically linking MUSL libc makes your life a lot easier (Alpine containers, distributing cross-distro binaries).

  • @[email protected] · 4 · 10 months ago

    Disk space and RAM availability have increased a lot in the last decade, which has allowed the rise of the lazy programmer, who’ll code without caring (or, increasingly, without knowing) about these things. Bloat is king now.

    Dynamic linking allows you to save disk space and memory by ensuring all programs use the single version of a library lying around, so there’s less testing. You’re delegating the version tracking to distro package maintainers.

    You can use the dl* family to better control what you load, and if the dependency is FLOSS, the world’s your oyster.
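
    A minimal sketch of that dl* usage, assuming a glibc Linux system where the math library is available as libm.so.6 (on glibc older than 2.34 you’d also link with -ldl):

    ```c
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* Load the math library at runtime instead of linking against it. */
        void *handle = dlopen("libm.so.6", RTLD_LAZY);
        if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* Look up the "cos" symbol and call it through a function pointer. */
        double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
        if (!cosine) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        printf("cos(0) = %f\n", cosine(0.0));
        dlclose(handle);
        return 0;
    }
    ```

    The same mechanism is what plug-in systems are built on: the library name can come from a config file instead of being hard-coded.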

    Static linking can make sense if you’re developing portable code for a wide variety of OSs and/or architectures, or if your dependencies are small and/or not that common or whatever.

    This, of course, is my take on the matter. YMMV.

    • @[email protected] · 5 · 10 months ago

      Except that with dynamic linking there is essentially an infinite amount of integration testing to do. Libraries change behaviour even when they shouldn’t and cause bugs all the time, so testing everything packaged together once is much less work overall.

      • @[email protected] · 0 · 10 months ago

        Which is why libraries are versioned. The same version can be compiled differently across OSes, yes, but again, unless it’s an obscure closed library, in my experience dependencies tend to be stable. Then again, all the dependencies I deal with are open source, so I can always recompile them if need be.

        More work? Maybe. But also more control and a more efficient app. Anyway, I’m paid to work.

        • @[email protected] · 2 · 10 months ago

          More control? If you’re speaking from the app developer’s perspective, dynamic linking very much gives you less control over what is actually executed in the end.

          • @[email protected] · 2 · 10 months ago

            The problem is that the application developer usually thinks they know everything about what they want from their dependencies, but they actually don’t.

    • @uis · 1 · 10 months ago

      Static linking can make sense if you’re developing portable code for a wide variety of OSs

      I doubt any other OS supports Linux syscalls.

  • @Synthead · 4 · 10 months ago

    It seems the world has shifted towards architectures and tooling that do not allow dynamic linking or make it harder.

    In what context? On Linux, dynamic linking has always been a steady thing.

      • @uis · 1 · 10 months ago

        but tools like Docker / containers, Flatpak, Nix, etc. essentially use a sort of soft static link, in that the software is compiled dynamically but the shared libraries are not actually shared at all beyond the boundary of the defining scope.

        This garbage practice is imported from Windows.

  • @[email protected] · 4 · 10 months ago

    Dynamically linked all the way; you only have to update one thing (mostly) to fix a vulnerability in a dependency, not rebuild every package.
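
    As a quick illustration (a sketch assuming a glibc Linux system), you can list the shared objects the loader would resolve for any binary; patching one of those libraries once fixes every program that lists it:

    ```shell
    # Show the shared libraries the dynamic loader resolves for a binary.
    # Every binary that prints the same libc line benefits from one update.
    ldd /bin/ls
    ```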

    • @colonial · 2 · 10 months ago

      Nice link - it’s good to see some hard data when most of the discussion around this is based on anecdotes and technical trivia.

    • @[email protected] · 1 · 10 months ago

      Thank you so much. I read this when it was written and then totally forgot where I’d read that information.

    • @[email protected] · 1 · 10 months ago

      That’s misleading though, since it only cares about one side, and ignores e.g. the much faster development speed that dynamic linking can provide.

      • @[email protected] · 2 · 10 months ago

        Nothing prevents you from using dynamic linking during development and static linking with aggressive LTO for public release.

        • @[email protected] · 1 · edited · 10 months ago

          True, but successfully doing dynamically linked old-distro-test-environment deployments gets rid of the real reason people use static linking.

    • @uis · 0 · 10 months ago

      Can we get that weighted by size?

  • @[email protected] · 2 · 10 months ago

    Depending on which is more convenient and whether your dependencies are security-critical, you can do both on the same program. :D

    • CyclohexaneOP · 4 · 10 months ago

      The main issue I was targeting was how modern languages do not support dynamic linking, or at least do not support it well, hence sorta taking away the choice. The choice is still there in C from my understanding, but it is very difficult in Rust for example.

      • @[email protected] · 1 · 10 months ago

        Yeah, you can dynamically link in Rust, but it’s a pain because you have to use the C ABI (Rust’s own ABI isn’t stable), and you miss out on exporting fancier types.
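
        For illustration, a sketch of what that looks like (the function name add is made up; in a real build you’d set crate-type = ["cdylib"] in Cargo.toml, but a main is included here so it runs standalone):

        ```rust
        // Export through the C ABI: #[no_mangle] keeps the symbol name
        // predictable, and extern "C" fixes the calling convention.
        // Only FFI-safe types (no generics, no trait objects, no fancy
        // enums) can cross this boundary.
        #[no_mangle]
        pub extern "C" fn add(a: i32, b: i32) -> i32 {
            a + b
        }

        fn main() {
            // A real cdylib would have no main; this just exercises the fn.
            println!("{}", add(2, 3));
        }
        ```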

        • @[email protected] · 2 · 10 months ago

          Just a remark: C++ has exactly the same issues. In practice both clang and gcc have good ABI stability, but not perfect, and not between each other. And in any case, templates (and global mutable statics, for most use cases) don’t work through FFI.

  • @uis · 2 · 10 months ago

    You can statically link half a gig of Qt5 into every single application (half a gig for the calendar, half a gig for the file manager, etc.), or keep them a normal size. Also, if there’s a new bug in OpenSSL, it’s not your headache to monitor vuln announcements.

    This compromise makes it easier for the maintainers of the tools / languages

    What do you mean? Also, how would you implement plug-ins in a language that explicitly forbids dynamic loading, assuming such a language exists?