Yeah, another post about backups, but hear me out.

I’ve read most of the other posts here on Lemmy and gone through the documentation for different backup tools (rsync, Borg, Timeshift), but all of those tools seem to be meant for “static” files.

I mean, I have a few Docker containers with databases, Syncthing to sync files between my server, Android, desktop and Mac, and a few Samba shares between the server, Mac and desktop.

For instance, from Borg’s documentation:

  • Avoid running any programs that might change the files.
  • Snapshot files, filesystems, container storage volumes, or logical volumes. LVM or ZFS might be useful here.
  • Dump databases or stop the database servers.
  • Shut down virtual machines before backing up their images.
  • Shut down containers before backing up their storage volumes.
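
The snapshot bullet is the usual answer to the moving-target problem: freeze a point-in-time view of the data, back up the frozen view, then throw it away. A rough sketch with LVM and Borg follows; the volume group (vg0), logical volume (data), snapshot size, mount point and repo path are all assumptions, and `DRY_RUN=1` (the default here) prints each command instead of executing it, since most of them need root:

```shell
#!/bin/sh
# Rough sketch of "snapshot, back up the snapshot, discard it" with LVM
# and Borg. All device names, sizes and paths are assumptions; adjust
# for your system. DRY_RUN=1 (the default) only prints the commands.
set -eu
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1. Freeze a point-in-time, copy-on-write view of the live volume.
run lvcreate --size 5G --snapshot --name data-snap /dev/vg0/data

# 2. Mount the frozen view read-only and back THAT up; the live files
#    can keep changing underneath without corrupting the archive.
run mkdir -p /mnt/data-snap
run mount -o ro /dev/vg0/data-snap /mnt/data-snap
run borg create /backup/repo::data-snapshot /mnt/data-snap

# 3. Tear the snapshot down again; the live volume was never touched.
run umount /mnt/data-snap
run lvremove --yes /dev/vg0/data-snap
```

Borg also supports placeholders like `{now}` and `{hostname}` in archive names if you want timestamped archives; the fixed name above is just to keep the sketch short.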

How am I supposed to make a complete, automated backup of my system if my files are constantly changing? If I have to stop my containers, shut down Syncthing and my Samba shares to make a full backup, that seems like a bit too much friction and prone to errors…

Also, nowhere could I find any mention of how to restore a full backup onto a freshly installed system with an LVM partition layout (user creation, filesystem partitioning…).

Maybe I have a bad understanding of how this works with Linux files, but doing a full backup this way on a server feels unreliable and prone to corrupted files and backups.

VMs are easier to roll back with snapshots, and I couldn’t find a similar way to do it on a bare-metal server…

I hope someone can point me in the right direction, because right now I have the feeling I can only back up my compose files and then do a full reinstallation and reconfiguration, which is exactly what a backup is supposed to save me from… not having to reconfigure everything!

Thanks

  • @Anonymouse
    1 year ago

    After all the posts about backups, I started looking at mine with a more critical eye and discovered ways it’s lacking. I am using Duplicity because of the number of backends it supports (I’m using rsync, since my ancient NAS has a module for an rsync server), it can do incremental backups, it can encrypt them, and it is available in all my distros’ package managers.

    I am excluding files from packages that haven’t changed and other things that can be downloaded again, like Docker images. I’ve used it a few times to restore a misplaced "sudo rm -rf ." in a subdir of home, with success! But I realized that a full restore would be time-consuming and difficult, because I don’t know my LVM structure, installed packages, etc.
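
    For what it’s worth, a selective restore for that kind of "oops" looks roughly like this; the backend URL and paths are made up for illustration, and `DRY_RUN=1` (the default) prints the command instead of running it:

```shell
#!/bin/sh
# Sketch of a selective Duplicity restore after an accidental delete.
# The rsync URL and paths are assumptions. DRY_RUN=1 (the default)
# prints the command instead of executing it.
set -eu
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Pull a single subdirectory out of the archive and put it back in
# place; the rest of the backup set is left alone.
run duplicity restore --file-to-restore home/user/projects \
    rsync://user@nas//backups/full /home/user/projects
```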

    I call Duplicity with a script via cron, so I am updating the script to dump installed packages, LVM info, the sfdisk structure, LUKS headers and fstab into a “config” backup; everything else goes into a second backup archive. My plan is to boot from a USB disk, restore the config backup to a RAM disk, format the drives, apply the LVM structure, set up LUKS from the saved config info, mount the disks according to the saved fstab, use the package manager to reinstall the packages from the config file, and then restore the main backup on top of that.
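
    That config dump could be sketched as a short pre-backup step. The device names (/dev/sda, /dev/sda2), the volume group (vg0) and the output directory are assumptions; `DRY_RUN=1` (the default) prints each command instead of executing it:

```shell
#!/bin/sh
# Sketch of a pre-backup "config" dump: capture what is needed to
# rebuild the disk layout before restoring any files. Device names,
# volume group and output directory are assumptions. DRY_RUN=1 (the
# default) prints each command rather than running it.
set -eu
CONF=/var/backups/config
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run mkdir -p "$CONF"
# Installed packages (Debian-style; use your distro's equivalent).
run sh -c "dpkg --get-selections > $CONF/packages.list"
# Partition table, LVM metadata, LUKS header and mount layout.
run sh -c "sfdisk --dump /dev/sda > $CONF/sda.sfdisk"
run vgcfgbackup --file "$CONF/vg0.lvm" vg0
run cryptsetup luksHeaderBackup /dev/sda2 --header-backup-file "$CONF/sda2.luks"
run cp /etc/fstab "$CONF/fstab"
```

    Roughly speaking, the matching other halves at restore time are `sfdisk /dev/sda < sda.sfdisk`, `vgcfgrestore` and `cryptsetup luksHeaderRestore`.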

    It’s a little more work, but I’m hoping the backups will be small enough that I don’t need to buy more drives to keep them.

    I do have a MySQL database that I dump to a backup file, which then gets scooped up in the drive backup, so I don’t need to take the DB offline. I also have my containers’ volumes on a btrfs disk, so I can just take a snapshot and back that up. I haven’t updated the script for that yet, but it’s currently working with LVM snapshots.
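
    The dump-then-snapshot part might look something like this; the mysqldump options, subvolume paths and backup URL are assumptions, and `DRY_RUN=1` (the default) prints each command instead of running it:

```shell
#!/bin/sh
# Sketch of dump-the-DB-then-snapshot-the-volumes. The mysqldump
# options, subvolume paths and backup URL are assumptions. DRY_RUN=1
# (the default) prints each command instead of executing it.
set -eu
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1. Dump the DB to a file; the regular file backup scoops the dump
#    up, so the server never has to go offline.
run sh -c "mysqldump --single-transaction --all-databases > /srv/backup/mysql.sql"

# 2. Take a read-only btrfs snapshot of the container volumes, back up
#    the snapshot instead of the live data, then drop it.
run btrfs subvolume snapshot -r /srv/volumes /srv/volumes-snap
run duplicity /srv/volumes-snap rsync://user@nas//backups/volumes
run btrfs subvolume delete /srv/volumes-snap
```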

    HTH, pray you never really need the backup!