Context for newbies: Linux refers to network adapters (wifi cards, ethernet cards, etc.) by so-called “interfaces”. For the longest time, interface names were assigned based on the type of device and the order in which the system discovered it, so eth0, eth1, wlan0, and wwan0 are all possible interface names. The catch is that “the order in which the system discovered it” is not deterministic, which means hardware can switch interface names across reboots. That is a real problem for things like servers that rely on interface names staying the same.

The solution to this issue is to assign custom names based on MAC address. The MAC address is hardcoded into the network adapter and will not change. (There are other ways to pin names as well, such as writing custom udev rules.)
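
For example, here is a minimal sketch of a systemd .link file that pins a name by MAC address (the MAC aa:bb:cc:dd:ee:ff, the file name, and the name lan0 are all placeholders):

    # /etc/systemd/network/10-lan0.link
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=lan0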

Red Hat, however, found this solution too simple and instead devised their own scheme for assigning network interface names (the one that produces names like enp3s0 and wlp2s0, derived from firmware and PCI bus topology). It fails at solving the problem it was created to solve, while making interface names much harder to type and remember.

To disable predictable interface naming and switch back to the old scheme, add net.ifnames=0 and biosdevname=0 to your boot parameters.
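
On a GRUB-based setup that usually means editing /etc/default/grub and regenerating the config; a rough sketch (paths and existing parameters vary by distro):

    # /etc/default/grub (append to whatever is already in this variable)
    GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"

    # then regenerate the config; on Fedora/RHEL the path is typically:
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg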

The template for this meme is called “stop doing math”.

  • @[email protected]
    link
    fedilink
    English
    65 months ago

    You’re not wrong. But generally the idiocy is in response to berserkness elsewhere; madness follows…

    • @[email protected]
      link
      fedilink
      15 months ago

      I’m with our binary friend; the systems they try to replace tend to be time-tested, reliable, and simple (if not necessarily immediately obvious) to manage. I can’t think of a single instance where a Redhat-ism is better than, or even equivalent to, what we already have. In each case it’s been a pretty transparent attempt to move from Embrace to Extend, and that never ends well for the users.

      • @renzevOP · 2 points · 5 months ago

        I can’t think of a single instance where a Redhat-ism is better

        I don’t know if it would be accurate to call it a redhat-ism, but btrfs is pretty amazing. Transparent compression? Copy-on-write? Yes please! I’ve been using it for so long now that it’s spoiled me lol. Whenever I’m on an ext4 system I have to keep reminding myself that copying a huge file or directory will… you know… actually copy it instead of just making reflinks
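
        (Side note for the curious: on btrfs you can even ask cp for a reflink copy explicitly; a quick sketch:)

            # completes near-instantly on btrfs: both files share extents
            # until one of them is modified (copy-on-write)
            cp --reflink=always hugefile hugefile.copy

            # the same command fails on ext4, which has no reflink support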

        • @[email protected]
          link
          fedilink
          35 months ago

          I’ve never actually tried BTRFS, there were a few too many “it loses all your data” bugs in the early days, and I was already using ZFS by then anyway. ZFS has more than its fair share of problems, but I’m pretty confident my data is safe, and it has the same upsides as BTRFS. I’m looking forward to seeing how bcachefs works now that it’s in the kernel, and I really want to compare all three under real workloads.

          • @renzevOP · 3 points · 5 months ago

            Ooh, I’ve never heard of bcachefs, sounds exciting! I see it supports encryption natively, which btrfs doesn’t. Pretty cool!

            Personally I’ve never had any issues with btrfs, but I did start using it only a couple years ago, when it was already stable. Makes sense that you’d stick with zfs tho, if that’s what you’re used to.

          • @[email protected]
            link
            fedilink
            15 months ago

            There’s a whole bunch of “it loses all your data” bugs in OpenZFS too, ironically, although it’s way way less fragile than btrfs in general.

            That said, the latter is pretty much solid too, unless you do raid5-like things.

            • @[email protected]
              link
              fedilink
              15 months ago

              Yeah, I know there was one a while back, and if you don’t use ECC RAM, given enough time, it will eat your data as it tries to correct checksum errors caused by memory corruption. That’s why we keep backups, right? Right?

              I tend to assume that every storage system will eventually lose data, so having multiple copies is vital.

      • @[email protected]
        link
        fedilink
        English
        35 months ago

        so does RPM.

        Careful. Jeff’s format gives us really great advantages from atomic packaging that we don’t have elsewhere. THAT, at least, was a great thing.

        Lennart’s Cancer, though, can die in a fire.

        • @[email protected]
          link
          fedilink
          25 months ago

          Atomic updates are amazing. But the package manager is slow as hell. SuSE managed to make zypper much faster using the same package format.

          • @anyhow2503 · 3 points · 5 months ago

            The only thing that’s slow is dnf’s repository check and some migration scripts in certain fedora packages. If that’s the price I need to pay to get seamless updates and upgrades across major versions for nearly a decade, then I can live with that.
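
            (For reference, the major-version jump being described is dnf’s system-upgrade flow; roughly, with the target release number as a placeholder:)

                sudo dnf upgrade --refresh
                sudo dnf install dnf-plugin-system-upgrade
                sudo dnf system-upgrade download --releasever=41
                sudo dnf system-upgrade reboot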

        • @renzevOP · 1 point · 5 months ago

          deleted by creator