Yeah, another post about backups, but hear me out.

I’ve read most of the other posts here on Lemmy and gone through the documentation for different backup tools (rsync, Borg, Timeshift), but all of those tools seem designed for “static” files.

I mean, I have a few Docker containers with databases, Syncthing to sync files between my server, Android, desktop and Mac, and a few Samba shares between the server, Mac and desktop.

For example, per Borg’s documentation:

  • Avoid running any programs that might change the files.
  • Snapshot files, filesystems, container storage volumes, or logical volumes. LVM or ZFS might be useful here.
  • Dump databases or stop the database servers.
  • Shut down virtual machines before backing up their images.
  • Shut down containers before backing up their storage volumes.

How am I supposed to make a complete, automated backup of my system if my files are constantly changing? If I have to stop my containers and shut down Syncthing and my Samba shares to make a full backup, that seems like far too much friction and is prone to errors…

Also, nowhere could I find any mention of how to restore a full backup onto a freshly installed system with an LVM partition layout (user creation, filesystem partitioning…).

Maybe I have a poor understanding of how this works with Linux files, but doing a full backup this way feels unreliable and prone to corrupted files and backups on the server.

VMs are easy to roll back with snapshots, and I couldn’t find a similar approach for a bare-metal server…

I hope someone can point me in the right direction, because right now I have the feeling I can only back up my compose files and then do a full reinstallation and reconfiguration, which defeats the point of a backup: not having to reconfigure everything!

Thanks

  • @fubo

    What are you trying to accomplish? Is a filesystem-level backup really the thing you need for those services, or do you need something more like a database dump or replica?

  • @GustavoM

    …what? All I need is a single bash script (and less than 5 minutes) to get all my previous stuff back.
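
    A minimal sketch of what such a one-script restore might look like, assuming backups live on an rsync-reachable host (the host and all paths below are placeholders, not the commenter’s actual setup):

    ```bash
    #!/usr/bin/env bash
    # Minimal one-shot restore sketch; host and paths are placeholders.
    set -euo pipefail

    BACKUP_SRC="backup-host:/srv/backups/latest/"  # hypothetical backup location
    RESTORE_DST="/home/user/"                      # hypothetical restore target

    # Pull everything back, preserving permissions, ownership,
    # hard links, ACLs and extended attributes.
    rsync -aHAX --info=progress2 "$BACKUP_SRC" "$RESTORE_DST"
    ```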

  • UnfortunateShort

    All you need is restic, rclone, snapper and btrfs. Snapper and btrfs work almost on their own, rclone helps you connect to remote/cloud storage and restic enables very straightforward backups.

    As for restic, just be aware that it creates lock files, so after an unexpected shutdown you might need to delete them manually. The advantage is that you can access one repo from multiple machines. And don’t forget to run a cleanup command in your script (if you automate it), because restic doesn’t prune old snapshots automatically.

    rclone is usually rather easy to use, but your mileage may vary depending on the type of storage you want to use, especially if you want to tune for maximum performance.
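
    A minimal sketch of that workflow, assuming you’ve already created a remote with `rclone config` (the remote name, paths and retention policy below are placeholders):

    ```bash
    # Back up over an rclone remote; "remote:backups" is whatever you configured.
    export RESTIC_REPOSITORY="rclone:remote:backups"
    export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"

    restic backup /home /etc

    # After an unexpected shutdown, remove stale locks before retrying:
    restic unlock

    # restic never prunes by itself, so do it in your script:
    restic forget --keep-daily 7 --keep-weekly 4 --prune
    ```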

  • @Anonymouse

    After all the posts about backups, I started looking at mine with a more critical eye and discovered ways that it’s lacking. I’m using duplicity because of the number of backends it supports (I use rsync, since my ancient NAS has an rsync server module), it can do incremental backups, it can encrypt them, and it’s available in all my distros’ package managers.

    I am excluding files from packages that haven’t changed and other things that can be re-downloaded, like Docker images. I’ve used it a few times to recover from a misplaced "sudo rm -rf . " in a subdirectory of home, with success! But I realized that a full restore would be time-consuming and difficult, because I don’t know my LVM structure, installed packages, etc.

    I call duplicity from a script via cron, so I’m updating it to dump the installed packages, LVM info, sfdisk structure, LUKS headers and fstab into a “config” backup, with everything else going into a second backup archive. My plan is to boot from a USB disk, restore the config backup to a RAM disk, format the drives, apply the LVM structure, set up LUKS from the saved config info, mount the disks via the saved fstab, install the packages from the config file with the package manager, and then restore the backup on top of that.
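
    A sketch of what such a “config” dump could look like; the device names, the package-list command and the duplicity target are all placeholders for whatever your system actually uses:

    ```bash
    #!/usr/bin/env bash
    # Dump system-layout info for disaster recovery; devices are placeholders.
    set -euo pipefail
    OUT=/root/config-backup
    mkdir -p "$OUT"

    dpkg --get-selections > "$OUT/packages.list"    # or rpm -qa, pacman -Qqe...
    vgcfgbackup -f "$OUT/lvm-%s.conf"               # LVM metadata, one file per VG
    sfdisk --dump /dev/sda > "$OUT/sda.sfdisk"      # partition table layout
    cryptsetup luksHeaderBackup /dev/sda2 \
        --header-backup-file "$OUT/sda2-luks-header.img"
    cp /etc/fstab "$OUT/fstab"

    # Ship it off with duplicity (illustrative rsync-daemon target):
    duplicity "$OUT" rsync://user@nas::backups/config
    ```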

    It’s a little more work, but I’m hoping the backups will be small enough that I don’t need to buy more drives to keep them.

    I do have a MySQL database that I dump to a backup file, which gets scooped up in the drive backup, so I don’t need to take the DB offline. My containers’ volumes are on a btrfs disk, so I can just take a snapshot and back that up; I haven’t updated the script for that yet, but it’s currently working with LVM snapshots.
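
    For reference, the dump-plus-snapshot part might look roughly like this (the VG/LV names, sizes and backup target are placeholders):

    ```bash
    # 1. Consistent MySQL dump without taking the DB offline (InnoDB):
    mysqldump --single-transaction --all-databases > /srv/backups/mysql.sql

    # 2. Point-in-time LVM snapshot of the data volume:
    lvcreate --snapshot --size 5G --name data-snap /dev/vg0/data
    mkdir -p /mnt/data-snap
    mount -o ro /dev/vg0/data-snap /mnt/data-snap

    # 3. Back up the frozen view, then drop the snapshot:
    duplicity /mnt/data-snap rsync://user@nas::backups/data
    umount /mnt/data-snap
    lvremove -y /dev/vg0/data-snap
    ```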

    HTH, pray you never really need the backup!

  • @bleph

    For a Dropbox-y experience (albeit slow), I have been using SpiderOak. I wanted an E2E-encrypted cloud backup and it fits the bill (Snowden Approved™). I can’t help feeling I could get faster and cheaper results with something hand-rolled, but… time

  • @[email protected]

    You mentioned it, but snapshots are the key: LVM, btrfs, ZFS. Take a snapshot, back it up, delete it.
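
    A minimal btrfs version of that loop, with placeholder paths and restic standing in for whatever backup tool you prefer:

    ```bash
    # Take a read-only snapshot, back it up, delete it; paths are placeholders.
    btrfs subvolume snapshot -r /srv/data /srv/.snapshots/data-now
    restic backup /srv/.snapshots/data-now
    btrfs subvolume delete /srv/.snapshots/data-now
    ```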