I like immutable Linux for this reason. If you use almost exclusively containers and flatpaks you can rebuild easily.
As someone I’d still consider a noob, I did this less than a month after getting a new laptop last January. I probably broke something trying to get the headphone jack working, and then Bluetooth stopped working properly as well after installing Steam, so I started over. All I know for certain is that I accidentally destroyed a folder I shouldn’t have, which pretty much bricked the system and made nothing launchable, terminal included. This was on MX, and I haven’t had issues since reinstalling.
One of the big selling points of Linux to me was that I can automate my install from end to end. I haven’t bothered automating the installer, but once it boots I run a playbook to set everything up and restore most of my homedir from backup. Everything down to setting my custom keyboard shortcuts, extensions and wallpaper is covered.
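Roughly, the post-boot step looks like this (playbook name, backup tool, and paths are placeholders, not my exact setup):

```
# run the playbook against the local machine (packages, services, dotfiles)
ansible-playbook -i localhost, -c local setup.yml
# restore most of the homedir from the latest backup (restic is just an example)
restic -r /mnt/backup/home restore latest --target ~/
# GNOME stores custom shortcuts, wallpaper and the enabled-extensions list in dconf,
# so they come back from a dump the same way
dconf load / < ~/dotfiles/dconf-dump.ini
```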
These days I run Silverblue, and I’m trying to find the time to put together a build pipeline so I can produce my own images on top of Silverblue’s.
Either way, I have no fear of reinstalls.
I just rollback my snapper.
My brother has reinstalled his Windows more often than I’ve run “zypper dup”.
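For anyone who hasn’t seen it, the rollback after a bad “zypper dup” is basically this (the snapshot number is just an example taken from “snapper list”):

```
sudo snapper list          # find the last good pre-dup snapshot
sudo snapper rollback 42   # make it the new default root subvolume
sudo reboot
```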
I reinstall at the drop of a hat. Pretty much any excuse to try another distro or configuration I was uncertain about.
The whole point of doing a separate partition for your home directory is to do just that… What the fuck is this even supposed to mean?
Uh… I don’t have a separate partition for /home. I have a separate zfs filesystem for it though. If I run into issues, I can restore from snapshot and not affect it.
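Something like this, with dataset names as examples:

```
# instant, copy-on-write snapshot of the home dataset
sudo zfs snapshot rpool/home@before-tinkering
# ...break the OS, fix or reinstall it...
# home can be rolled back (or simply left alone) without touching root
sudo zfs rollback rpool/home@before-tinkering
```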
Same, but BTRFS
That’s fair. I chose ZFS because I’ve used it before and understand it fairly well already. I know nothing about BTRFS, so perhaps you could educate me a little. I’m working on setting up a CloudStack host using ZFS RAID 10. Does BTRFS have a flexible enough architecture that you could do something similar?
Edit: Perhaps you could also tell me about BTRFS’s speed too. From what I understand, ZFS outperforms BTRFS on large datasets, but I don’t know where the cutoff is. For reference, it would need to run twelve 10 TB HDDs.
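To be clear about what I mean by ZFS “RAID 10”: six mirrored pairs striped into one pool. Device names below are placeholders for the twelve 10 TB drives (by-id paths are better in practice):

```
sudo zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf \
  mirror /dev/sdg /dev/sdh \
  mirror /dev/sdi /dev/sdj \
  mirror /dev/sdk /dev/sdl
```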
I treat all my data as ephemeral, no need for separate home partition.
If you’ve got a problem and you reinstall and do the same stuff again, you’ll almost certainly get the same problem again. So, no, it’s only productive if you are in a fucked-up environment where changes bring more breakage than they fix.
It’s useful if you don’t plan to do the same thing again, though. So if you are just trying random stuff, yeah, go ahead.
If you’ve got a problem and you reinstall and do the same stuff again, you’ll almost certainly get the same problem again
Sure, but nobody’s likely to do that. If I wiped my system now, I doubt I could get it back to exactly the same state if I tried. There are way too many moving parts. There are changes I’ve forgotten I ever applied, or only applied accidentally. And there are things I’d do differently if I had the chance to start over (like installing something via a different one of the half-dozen-or-so methods of installing packages on my distro).
For example, I have Docker installed because I once thought a problem I had might have been Podman-specific. Turned out it was not. But I never did the surgery necessary to fully excise Docker. I probably won’t bother unless and until there is a practical reason to.
Try Root on ZFS.
If you run into an issue suddenly, you can restore to snapshot.
If that doesn’t resolve the issue, I reinstall. That works for me as a catch-all solution because I use Linux like a Chromebook: mostly web browsing.
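The restore itself is a one-liner; dataset and snapshot names here are just examples from a typical Root on ZFS layout:

```
sudo zfs snapshot rpool/ROOT/default@pre-update   # take before risky changes
sudo zfs rollback rpool/ROOT/default@pre-update   # restore if things go sideways
```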
That is idiotic; there is absolutely a reason to reinstall in some cases.
Meh, snapper rollback
Unless the drive gets corrupted or infected with malware, you can just load a previous snapshot. That’s much faster and easier than reinstalling.
Snapshot as in a VM?
Most people run their OS on physical hardware.
You can run your desktop inside of a VM with the GPU and USB PCIe devices passed through.
However, I think they are talking about btrfs
Btrfs has snapshots. They can be created instantly and don’t use any extra space until the files are changed.
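A snapshot is just another subvolume sharing the same extents, which is why it’s instant and costs nothing until files diverge. Paths here are examples:

```
sudo btrfs subvolume snapshot / /.snapshots/before-change
# read-only variant, handy as a backup source
sudo btrfs subvolume snapshot -r /home /.snapshots/home-before-change
```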
Ah, yeah, I have read about that. I do feel a bit hesitant to use BTRFS, so I didn’t think about it.
The Linux machines I have worked with all ran ext3/4 or xfs.
To be completely fair, I never gave BTRFS a proper chance: at first because it felt too new and unstable when I heard about it, and later because I heard it was developed by Facebook and let my distaste for that company color my perception of btrfs.
But I just checked the Wikipedia article and saw that plenty of reputable organizations have worked on btrfs, so I guess I’ll give it a go when I build a NAS…
Thanks for reminding me of it, I may get set in my ways from time to time but I do genuinely try to learn and change my way of thinking.
I wouldn’t use it for a NAS. You want ZFS for that.
Btrfs is good for small setups with either single or dual disks.
Just don’t use RAID 5 or 6; that code is still under development and not ready for production use yet.
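For a two-disk setup, mirrored data and metadata (raid1) is the well-tested profile. Device names are placeholders:

```
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY
sudo btrfs filesystem show   # confirm both devices joined the filesystem
```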
And often the fastest option even lol
Not when you’re “stuck”, though. You figure out the problem, boot a live system, fix it, and learn from your mistakes. My first Arch reinstalls, for example, were due to not understanding that I could just chroot in or pacstrap the packages I had forgotten.
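From the Arch live ISO the fix usually looks something like this (partitions and package names are just examples):

```
mount /dev/nvme0n1p2 /mnt          # root partition
mount /dev/nvme0n1p1 /mnt/boot     # EFI/boot partition
arch-chroot /mnt                   # fix configs, reinstall the bootloader, etc.
# or, if you only forgot some packages during install:
pacstrap /mnt linux linux-firmware networkmanager
```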
Sometimes, but not most of the time like it is on Windows. macOS is the same way thanks to its *nix underpinnings. I honestly can’t remember a time I’ve ever reinstalled the system to fix a problem.
With the way most distros are structured, you should never need a reinstall, since reinstalling the packages will fix any issues with broken system files. Broken configuration wouldn’t be as easy to fix, but still something you should be able to fix.
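The exact command is distro-specific, but “reinstall every installed package” is roughly a one-liner; Arch and Debian/Ubuntu shown as examples:

```
# Arch: reinstall all native packages
pacman -Qqn | sudo pacman -S -
# Debian/Ubuntu: reinstall everything currently marked as installed
sudo apt-get install --reinstall $(dpkg --get-selections | awk '$2=="install"{print $1}')
```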
The only reason to be reinstalling, in my eyes, is if you have a mess of packages and configuration you don’t remember, and want to get a clean slate to reconfigure instead of trying to figure out why everything was set up in a certain way.
As an IT guy who has worked professionally as a Linux sysadmin:
While you are correct, the factor you are missing is time.
There have been countless times I have reinstalled Linux machines because it was faster than troubleshooting the issue.
If you do it right, you should be able to trigger a rebuild in about 20 minutes by kicking off the right automation.
Virtualization and containerization are your friends. Combine that with Ansible and you are rock solid.
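As a sketch of what “kicking off the right automation” can look like for a VM rebuild (image location, kickstart file, and playbook are placeholders):

```
# unattended install of a fresh VM from a kickstart file
virt-install --name web01 --memory 4096 --vcpus 2 --disk size=40 \
  --location https://mirror.example.com/fedora/Server/x86_64/os/ \
  --initrd-inject web01.ks --extra-args "inst.ks=file:/web01.ks" \
  --noautoconsole
# once it boots, configuration is just a playbook run
ansible-playbook -i inventory.ini site.yml --limit web01
```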
Professionally on a non-recurring issue - absolutely.
With my stuff at home? Only if the wife suffers from the downtime.

Fair, but machines at work as a sysadmin are a different thing - hopefully there you’re also dealing with fast deployment prepared ahead of time. But if the issue is that you messed something up on your own computer, ignoring the issue in favor of reinstalling sounds likely to leave you oblivious to what the issue was, and likely to repeat your mistake.
That is fair, but it ignores compounding issues. If you install various software packages over the years and forget about them, and one of them causes an issue long after you’ve forgotten it exists, it is far easier to just reinstall.
It’s a joke.
I thought jokes were supposed to be funny…
Tbf “funny” is, by nature, subjective. Something may be funny to others but not to you, just as you may like onions while I may not, or I may find Shakira attractive while you may not, or I may be into pokemon but you may not, etc.
So, jokes are supposed to be funny, to someone, but you’re not necessarily that “someone.”
I think people do that even on Linux; sometimes problems are still very hard to solve and reinstalling is just faster. Maybe I’m the only one. On the other hand, there is distro hopping ╮(︶▽︶)╭
Fucking up your computer so much you decide to “distro hop” by reinstalling a new OS.
Isn’t that what everyone meant? Just me? Oh
My distro hops have been more like distro evacuations.
Whaddya talking about? I nuke my shit all the time.