• @[email protected]
    link
    fedilink
    59 months ago

    I mean, yes, if time is an issue, but code compiled on your own hardware is specifically tuned to your machine; some people want that tiny tweak of performance and stability.

    • Trigg · 12 points · 9 months ago

      The point being most AUR packages are compiled on each update

      • @[email protected]
        link
        fedilink
        -69 months ago

        But compiled on some other machine. Compiling on your own hardware optimizes it for that specific hardware, what that chip supports, etc.
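        On Arch, that tuning is a one-line config change (a hedged sketch: /etc/makepkg.conf is the standard path, but the exact flags are a matter of taste), since makepkg honours these settings whenever an AUR package builds locally:

        ```shell
        # /etc/makepkg.conf (excerpt) -- illustrative sketch, adjust to taste.
        # -march=native tells GCC/Clang to target this exact CPU's
        # instruction sets instead of the generic x86-64 baseline.
        CFLAGS="-march=native -O2 -pipe"
        CXXFLAGS="$CFLAGS"
        # Use all cores when makepkg compiles an AUR package:
        MAKEFLAGS="-j$(nproc)"
        ```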

        • exu · 14 points · 9 months ago

          No, AUR packages are compiled on your machine.

          • @kautau · 13 points · 9 months ago

            Not all of them, that’s why many packages have a [package]-bin version

          • @[email protected]
            link
            fedilink
            1
            edit-2
            9 months ago

            Ah, thought you meant in the AUR. I’m used to OBS, where you have both binaries and source available (OBS meaning Open Build Service, not the screen recorder)

    • @[email protected]
      link
      fedilink
      39 months ago

      I use both for different purposes. Gentoo’s USE flags are the reason I wait for compiles, but only for computers I touch the keyboard with. Everything else gets Arch.
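      For the curious, a minimal sketch of what that looks like (the paths are the standard Gentoo ones, but the specific flag choices here are just illustrative examples):

      ```shell
      # /etc/portage/make.conf (excerpt) -- illustrative only.
      # Global USE flags: build packages with wayland/pulseaudio support,
      # and leave X11 and bluetooth code paths out of the binaries entirely.
      USE="wayland pulseaudio -X -bluetooth"
      COMMON_FLAGS="-march=native -O2 -pipe"
      CFLAGS="${COMMON_FLAGS}"
      CXXFLAGS="${COMMON_FLAGS}"
      ```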

    • adONis · 1 point · 9 months ago

      would you mind elaborating on the benefits? like what does one actually gain in a real-world scenario by having the software tuned to a specific machine?

      disk space aside, given the sheer number of packages that come with a distro, are we talking about 30% less CPU and RAM usage (give or take), or is it more like squeezing out the last 5% of possible optimization?

      • @[email protected]
        link
        fedilink
        49 months ago

        Closer to the 5%. Between the intermediate code and the final code there is an optimization stage. The compiler can reduce redundant code and adjust for the machine; e.g. my understanding is an old 4700 can have different instruction sets available than the latest-gen Intel chips. Rather than compiling for generic x86, the optimization phase can tailor the build to the machine’s hardware. The benefits are like car tuning: at some point you only get marginal gains. But if squeezing out every drop of performance and reducing bytes is your thing, then the compiling time may not be seen as wasted.
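        You can see the difference on any box with GCC installed (a hedged sketch: these are standard GCC flags, but what they print depends entirely on your CPU):

        ```shell
        # Sketch: ask the compiler what "-march=native" resolves to on this
        # machine, versus the generic baseline it would otherwise use.
        command -v gcc >/dev/null || { echo "gcc not installed"; exit 0; }

        # Which concrete CPU model does "native" pick?
        gcc -march=native -Q --help=target 2>/dev/null | grep -m1 -- '-march=' || true

        # Which SIMD feature macros get predefined for that target?
        gcc -march=native -dM -E - </dev/null 2>/dev/null | grep -E '__(AVX|SSE4|FMA)' || true
        ```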