Does anybody have experience with both systems enough to compare them?

I’m currently using ifupdown on my Debian server as that’s the default, but it seems that the modern way of managing the local network is via systemd-networkd so I’m contemplating putting the effort in to migrate.

Would those of you who have experience with it, recommend it?

In my short investigation, I have made the following observations:

  • using networkd means you can use networkctl to manually control the interfaces which is quite convenient
  • networkd aims to be fully declarative
  • networkd separates the creation of virtual interfaces (netdev files) from their configuration (network files); see the sketch after this list
  • networkd doesn’t support all networking features (e.g. namespaces)
  • networkd is systemd, but surprisingly I can’t find information on how to create other unit files that depend on individual network interfaces going up or down, other than networkd-dispatcher. I don’t like dispatcher because, just like ifupdown, it triggers all the scripts and you need if tests to exclude the interfaces you don’t want affected. I’d like to write unit files that can be targeted to activate and deactivate when a particular interface goes up or down.
  • networkd, other than via dispatcher, does not seem to support adding arbitrary commands to run like ifupdown supports via e.g. pre-down, post-up, etc.
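To illustrate the declarative split between netdev and network files, here’s a minimal sketch; the names (br0, eth0), the file names and the DHCP choice are placeholders I picked for illustration, not from a real setup:

    # /etc/systemd/network/20-br0.netdev -- declares the virtual interface itself
    [NetDev]
    Name=br0
    Kind=bridge

    # /etc/systemd/network/10-eth0.network -- attaches the physical NIC to the bridge
    [Match]
    Name=eth0

    [Network]
    Bridge=br0

    # /etc/systemd/network/20-br0.network -- configures the bridge once it exists
    [Match]
    Name=br0

    [Network]
    DHCP=ipv4

With those in place, networkctl list and networkctl status br0 show the interface state, and networkctl reload picks up edited files without restarting systemd-networkd.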
  • @TheInsane42 (11 months ago)

    I’m using ifupdown2 and have services depending on the state of virtual network devices (BindsTo=sys-devices-virtual-net-.device).
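    Roughly like this, with the interface name filled in (dummy0 and the unit names here are just examples, not from my actual setup):

        # /etc/systemd/system/my-tunnel.service -- example name
        [Unit]
        Description=Service that only makes sense while dummy0 exists
        # stop when the device unit goes away, and order startup after it
        BindsTo=sys-devices-virtual-net-dummy0.device
        After=sys-devices-virtual-net-dummy0.device

        [Service]
        # placeholder command; put the real daemon or script here
        ExecStart=/usr/local/bin/my-tunnel-helper

        [Install]
        # pulled in when the device unit becomes active, i.e. when the interface appears
        WantedBy=sys-devices-virtual-net-dummy0.device

    systemd-escape --path /sys/devices/virtual/net/dummy0 prints the exact device unit name if you’re unsure about the escaping.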

    I hate systemd with a passion, as it refuses to wait for networking when you have some service specified to be started After networking: it starts the service as soon as the bringing-up of networking has been started, not after networking has finished bringing up everything.

    The only thing I let systemd do is monitor the state of the devices and set the interface names, as udev seems to mess these names up in combination with systemd (and there is too much depending on systemd now in Debian to get rid of the junk).
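    For the naming part, a .link file is enough; a minimal sketch (the MAC address and name are made up):

        # /etc/systemd/network/10-lan0.link -- rename the NIC with this MAC to lan0
        [Match]
        MACAddress=aa:bb:cc:dd:ee:ff

        [Link]
        Name=lan0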

      • @TheInsane42 (11 months ago)

        If you apply what is written at https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/#cutthecraphowdoimakesurethatmyservicestartsafterthenetworkisreallyonline it will work.

        Then I’ll need to do a global rewrite of all distribution-delivered service files and replace network.target with network-online.target. Then I don’t understand why maintainers all keep using network.target. For every service that needs some kind of network active I already have overrides to link them to the virtual or physical interfaces they actually use.

        Systemd-networkd has a learning curve, but once you learn it you’ll find it superior and more flexible than anything else.

        I still find it a solution desperately looking for a problem to solve. Yes, you can control loads of dependencies, but you could already do that with the init scripts. The main selling point on introduction was ‘it is faster’. Why would you want something faster when you use it once every kernel update? When you design your solutions correctly (redundant), you won’t even notice a reboot of 1 system in the setup.

        • @TCB13 (11 months ago)

          Then I don’t understand why maintainers all keep using network.target.

          Because most services are able to dynamically accommodate networking changes and act accordingly; it’s rare to have cases where we really need a working link, with addresses and a proper connection, when the service starts. For those special cases, as described, you need to enable systemd-networkd-wait-online.service and include After=network-online.target and/or Wants=network-online.target in your service.
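          For example, a drop-in for a hypothetical foo.service only needs this (plus systemctl enable systemd-networkd-wait-online.service):

              # /etc/systemd/system/foo.service.d/wait-online.conf
              [Unit]
              Wants=network-online.target
              After=network-online.target

          systemctl edit foo.service is the convenient way to create such a drop-in.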

          Programs should be designed to detect and react to networking changes, and both Apache and Nginx are good examples of software that does that. There are simpler cases, like stuff that needs to bind to non-existent IPs at boot (before networking), that can be handled with the net.ipv4.ip_nonlocal_bind and net.ipv6.ip_nonlocal_bind sysctls as described here.
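          In sysctl.d form that would be something like this (the file name is arbitrary):

              # /etc/sysctl.d/90-nonlocal-bind.conf -- allow binding to addresses not (yet) assigned locally
              net.ipv4.ip_nonlocal_bind = 1
              net.ipv6.ip_nonlocal_bind = 1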

          This may also interest you, as it covers what “up” means and a few other details: https://systemd.io/NETWORK_ONLINE/

          I still find it a solution desperately looking for a problem to solve.

          Systemd does a LOT more than “you can control loads of dependencies”, it solves tons of painful issues and provides a cohesive ecosystem of tools to manage stuff like:

            • Replace a bunch of poorly integrated tools such as NetworkManager

          Systemd is very useful and most people (including myself) don’t know about half of the things it can do. It isn’t taught, given the current poor state of things in software development, but if you use it you’ll quickly find out how to have systems running very well with fewer processes than conventional setups.

          • @TheInsane42 (11 months ago)

            Replace a bunch of poorly integrated tools such as NetworkManager

            You got me here… I’m now battling NetworkManager in scripting (alas, still no Ansible in place there, though I doubt it’ll be long before I have it implemented); the thing just refuses any configuration via files. Time to dive into the networkd setup. I also hope I can get rid of the very unstable pppoe and wide-dhcpv6-client setup I have here now via ifupdown2 on Debian. The restart handling I like as well. Time for a deep dive, I guess. On Debian NM never caught on, thank $preferredDeity, but at work it’s a disaster.

              • @TheInsane42 (11 months ago)

                I’ll see what I can manage. Thanks for the pointers.

                When I can manage simple ipv4 networking via networkd I’m already happy, as it means I can ditch NM again at work; that’s giving me a lot more headaches than the flapping SLAAC that I’m not dependent on (I already switched back to my super stable tunnel). The situation here is a setup with 2 ipv6 tunnels, 1 ipv6 SLAAC, source-based routing and no default gateway in the main routing table for ipv6. Everything runs via the ipv4 pppoe connection (and a load of vlans on both sides of my router to the internet).
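                From what I’ve read so far, the vlan and source-based-routing part maps onto networkd roughly like this; all names, IDs and addresses below are made-up placeholders, untested:

                    # /etc/systemd/network/10-eth0.network -- the parent NIC just lists the vlan to create on it
                    [Match]
                    Name=eth0

                    [Network]
                    VLAN=vlan10

                    # /etc/systemd/network/30-vlan10.netdev -- the vlan device itself
                    [NetDev]
                    Name=vlan10
                    Kind=vlan

                    [VLAN]
                    Id=10

                    # /etc/systemd/network/30-vlan10.network -- static ipv4 plus a source-based rule
                    [Match]
                    Name=vlan10

                    [Network]
                    Address=192.0.2.10/24

                    [Route]
                    # default route for table 100 via this vlan's gateway
                    Gateway=192.0.2.1
                    Table=100

                    [RoutingPolicyRule]
                    # traffic sourced from this subnet uses table 100 instead of main
                    From=192.0.2.0/24
                    Table=100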