• @Deckweiss
    8 months ago

    I know what it is. But it literally says “A stop job is running”, and since English is not my first language, I couldn’t think of a better way to express the technicalities of it in a short sentence.

    As for it having nothing to do with systemd:

    I am dual-booting Arch and Artix because I am currently in the middle of transitioning. I have the exact same packages on both installs (plus some extra OpenRC packages on Artix).

    • About 30% of the shutdowns on Arch do the stop job thing. It happens randomly, without any changes being made by me between sessions.

    • 0% of the shutdowns on Artix take more than 5 seconds.

    I know that I can configure it. But why is 90 seconds the default? It is utterly unreasonable. You cite Windows doing it, but compare it instead to macOS, which has extremely fast startups and shutdowns.

    And back to the technicalities: OpenRC doesn’t say “a stop job is running”, so who runs the stop job if not systemd?

    • @[email protected]
      8 months ago

      The question you should be asking is what’s wrong with that job that causes it to run long enough for the timeout to have to kill it.

      systemd isn’t the problem here; all it’s doing is making it easy to find out which process is slowing down your shutdown, and making sure it doesn’t stall forever.
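
      For what it’s worth, the journal from the previous boot usually names the stalled unit directly. This is a sketch assuming a persistent systemd journal; the unit name is just an example:

```shell
# Inspect the journal from the previous boot (-b -1) for
# warnings logged during shutdown; the stalled stop job's
# unit is usually named right there.
journalctl -b -1 -p warning

# If you suspect a specific service, filter to its messages only
# (NetworkManager.service is a hypothetical example here):
journalctl -b -1 -u NetworkManager.service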

      • @Deckweiss
        8 months ago

        I will not debug third-party apps. I don’t even want to think about my OS or ask any questions about it. I want to use a PC and do my job. That includes it shutting down asap when I need it to shut down asap.

        systemd default - shutdown not always asap

        openrc default - shutdown always asap

        whatever the heck the Mac’s init system is - shutdown always asap

        It may not be the “fault” of systemd, but it doesn’t do anything helpful to align itself with my needs either.

        • @[email protected]
          8 months ago

          The default is as long as it is because most people value not losing data, or avoiding corruption, or generally preserving the proper functioning of software on their machine, over 90 seconds during which they could simply walk away.

          Especially when those 90 seconds only even come up when something isn’t right.

          If you feel that strongly that you’d rather let something malfunction, then you’re entirely at liberty to change the configuration. You don’t have to accept the design decisions of the package maintainers if you really want to do something differently.
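
          If you do want a shorter cap, the default can be overridden with a drop-in file rather than editing system.conf itself. The filename and the 10-second value below are just examples; pick your own trade-off:

```ini
# /etc/systemd/system.conf.d/10-shorter-stop-timeout.conf
# Lowers the default time systemd waits for a stop job
# before sending SIGKILL (the stock default is 90s).
[Manager]
DefaultTimeoutStopSec=10s
```

          Run `systemctl daemon-reexec` (or reboot) for the manager to pick it up; a per-unit `TimeoutStopSec=` override also exists if only one service misbehaves.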

          Also, if you’re that set against investigating why your system isn’t behaving the way you expect, then what the hell are you doing running Arch? Half the point of that distro is that you get the bleeding edge of everything, and you’re expected to maintain your own damn system.

          • @Deckweiss
            8 months ago

            If an app didn’t manage to shut down in 90 seconds, it is probably hanging, and there will be “DaTa LoSs” whether you kill it after 2 seconds or after 90.


            Been running Arch for over 5 years now.

            I track all my hours, and for Arch maintenance I’ve spent a grand total of ~41 hours (desktop + laptop, including sitting there and staring at the screen while an update is running). The top three longest sessions were:

            1. btrfs data rescue after I deleted a parent snapshot of my rollback (~20h)
            2. GRUB update (~2h)
            3. JDK update which was fucky (~30min)


            That works out to about 8.2 hours per year (or ~10 minutes per week), which is less than I had to spend on Windows maintenance (~22h/y AFAIR; about half of that time was manually updating apps by going to their websites and downloading newer versions).
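
            The back-of-the-envelope numbers check out (assuming 52 weeks per year):

```shell
# ~41 hours of maintenance over 5 years, converted
# to hours/year and minutes/week
awk 'BEGIN {
  per_year = 41 / 5              # hours per year
  per_week = per_year * 60 / 52  # minutes per week
  printf "%.1f h/year, %.1f min/week\n", per_year, per_week
}'
# → 8.2 h/year, 9.5 min/week
```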

            Ubuntu also fared worse for me, with two weekends of maintenance in a year (~32h), because I need the bleeding edge and some weird-ass packages for work, and it resulted in a Frankenstein of PPAs and self-built stuff which completely broke on every release upgrade.

            • @[email protected]
              8 months ago

              btrfs data rescue after I deleted a parent snapshot of my rollback

              Can you expand a bit on that? I thought it didn’t matter if you deleted parent snapshots because the extents required by the child would still be there.
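
              For context, the parent/child relationships can be inspected before deleting anything. A rough sketch, assuming a mounted btrfs filesystem; the snapshot path is illustrative:

```shell
# List subvolumes with their parent IDs (-p), as a table (-t):
sudo btrfs subvolume list -p -t /

# Show details (including the parent UUID) for one subvolume:
sudo btrfs subvolume show /.snapshots/42/snapshot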

              • @Deckweiss
                8 months ago

                Honestly, I have no idea why it went wrong or why it let me do that. My memory is a bit fuzzy since it’s been a while, but as best I can remember, here’s what I did step by step:

                1. fuck around with power management configs
                2. using btrfs-assistant gui app, rolled back to before that
                3. btrfs-assistant created an additional snapshot, called backup something, I didn’t really pay attention
                4. reboot, all seemed good
                5. used btrfs-list to take a look; the subvolume that was the current root / was a child of the aforementioned backup subvolume
                6. started btrfs-assistant and deleted the backup subvolume
                7. system suddenly read only
                8. reboot, still read only
                9. btrfs check reported broken refs and some other errors
                10. I tried to let btrfs check fix the errors, which made it worse; now I couldn’t even mount the drive anymore because btrfs was completely borked
                11. used btrfs rescue, which got all files out onto an external drive successfully
                12. installed arch again and rsync the rescued files over the new install, everything works as before, all files are there
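
                The rescue in step 11 roughly corresponds to something like this; the device and mount point are placeholders, and it assumes the filesystem no longer mounts:

```shell
# Copy everything btrfs can still read off the broken filesystem
# onto a healthy external drive (read-only; does not modify the
# source device).
sudo btrfs restore -v /dev/sdX2 /mnt/external

# If the superblocks themselves are damaged, there is also:
sudo btrfs rescue super-recover /dev/sdX2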
                • @[email protected]
                  8 months ago

                  btrfs check said broken refs and some other errors,

                  Gotcha. That must have been a kernel bug (or a hardware error); none of the userspace utilities could cause it unless they were manipulating the block device directly, which would be really dumb. It’s possible it wasn’t even related to the subvolume manipulation.