I see many posts asking about what other lemmings are hosting, but I’m curious about your backups.

I’m using duplicity myself, but I’m considering switching to borgbackup when 2.0 is stable. I’ve had some problems with duplicity: mainly that the initial sync took incredibly long, and once a few directories got corrupted (they could no longer be decrypted by gpg).

I run a daily incremental backup and send the encrypted diffs to a cloud storage box. I also use SyncThing to share some files between my phone and other devices, so those get picked up by duplicity on those devices.
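A daily run like that boils down to a single crontab entry. This is only a hedged sketch — the key ID, paths, and storage-box host are placeholders, not my actual values; duplicity defaults to incremental once a full backup exists, and `--full-if-older-than` starts a fresh chain periodically:

```
# Crontab sketch: encrypted incremental backup daily at 03:00, new full chain monthly.
# ABCD1234, /home/USERNAME, and STORAGEBOX_HOST are placeholders.
0 3 * * * duplicity --encrypt-key ABCD1234 --full-if-older-than 1M /home/USERNAME sftp://USERNAME@STORAGEBOX_HOST/backups
```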

  • wpuckering
    2 years ago

    I run all of my services in containers, and intentionally leave my Docker host as barebones as possible so that it’s disposable (I don’t back up anything aside from data to do with the services themselves; the host can be launched into the sun without any backups and it wouldn’t matter). I like to keep things simple yet practical, so I just run a nightly cron job that spins down all my stacks, creates archives of everything as-is at that time, and uploads them to Wasabi, AWS S3, and Backblaze B2. Then everything just spins back up, rinse and repeat the next night. I use lifecycle policies to keep the last 90 days’ worth of backups.
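The 90-day retention maps to a bucket lifecycle rule; a minimal sketch of the JSON (the rule ID and empty prefix here are arbitrary, and BUCKET_NAME is a placeholder):

```json
{
  "Rules": [
    {
      "ID": "expire-backups-after-90-days",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 90 }
    }
  ]
}
```

On AWS this is applied with `aws s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration file://lifecycle.json`; Wasabi and B2 would need their own (S3-compatible or native) equivalents.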

      • wpuckering
        2 years ago

        Sure, I won’t post exactly what I have, but something like this could be used as a starting point:

        #!/bin/bash
        now="$(date +'%Y-%m-%d')"

        echo "Starting backup script"

        echo "Backing up directories to Wasabi"
        for dir in /home/USERNAME/Docker/*/
        do
            dir=${dir%/}    # strip trailing slash
            backup_dir_local="/home/USERNAME/Docker/${dir##*/}"
            backup_dir_remote="$now/${dir##*/}"

            echo "Spinning down stack"
            cd "$backup_dir_local" && docker compose down --remove-orphans

            echo "Going to backup $backup_dir_local to s3://BUCKET_NAME/$backup_dir_remote"
            aws s3 cp "$backup_dir_local" "s3://BUCKET_NAME/$backup_dir_remote" --recursive --profile wasabi

            echo "Spinning up stack"
            cd "$backup_dir_local" && docker compose up --detach
        done
        aws s3 cp /home/USERNAME/Docker/backup.sh "s3://USERNAME/$now/backup.sh" --profile wasabi

        echo "Sending notification that backup tasks are complete"
        curl "https://GOTIFY_HOSTNAME/message?token=GOTIFY_TOKEN" -F "title=Backup Complete" -F "message=All container data backed up to Wasabi." -F "priority=5"

        echo "Completed backup script"
        

        I have all of my stacks (defined using Docker Compose) in separate subdirectories within the parent directory /home/USERNAME/Docker/; this is the only subdirectory that matters on the host. I have the backup script in the parent directory (in reality I have a few scripts in use, since my real setup is a bit more elaborate than the above). For each stack (i.e. subdirectory) I spin the stack down, make the backup and copy it up to Wasabi, then spin the stack back up, and progress through each stack until done. Lastly I copy up the backup script itself (in reality I copy up all of the scripts I use for various things). Not included in the script, and outside the scope of the example, is the fact that I have the AWS CLI configured on the host with profiles to be able to interact with Wasabi, AWS, and Backblaze B2.

        That should give you the general idea of how simple it is. In the above example I’m not doing some things I actually do, such as creating a compressed archive, validating it to ensure there’s no corruption, and pruning files within any of the stacks that aren’t needed for the backup. So don’t take this to be a “good” solution, but one that does the minimum necessary to have something.
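The archive-and-validate step mentioned there could be sketched like this (the directory and file names are illustrative stand-ins, not the actual layout):

```shell
#!/bin/bash
# Sketch: archive one stack directory, then validate the archive before upload.
set -euo pipefail

stack_dir="$(mktemp -d)"              # stand-in for one stack's directory
echo "services: {}" > "$stack_dir/compose.yaml"

archive="$(mktemp -u).tar.gz"
tar -czf "$archive" -C "$stack_dir" .

# Validate before upload: gzip integrity check, then a full listing pass.
gzip -t "$archive"
tar -tzf "$archive" > /dev/null

echo "validated $archive"
```

Only an archive that passes both checks would then be handed to `aws s3 cp`.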

    • @[email protected]
      2 years ago

      I basically do the exact same thing. A few mins of downtime at 5am isn’t gonna bother me.

      I’ve tested a few times nuking my containers and volumes and kicking off my ansible playbooks which redeploy and restore from s3 backups.

      Haven’t had to redeploy from scratch yet though

    • @easeKItMAn
      2 years ago

      Can relate to the approach. Keeping the host barebones and everything dockerized, with data volumes hosted separately, eases maintenance. For rapid redeployment a custom script sets up firewall/fail2ban/SSH/smartctl/crontab/docker/docker-compose and finally loads backups of all docker images from another instance. A complete setup from scratch takes 10-15 minutes. Tried Ansible but ended up custom scripting.
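A hedged sketch of such a redeployment script, assuming a Debian/Ubuntu host and ufw as the firewall (package names and the project directory are placeholders); it dry-runs by default so the steps can be reviewed before applying:

```shell
#!/bin/bash
# Provisioning sketch. Nothing is executed for real unless APPLY=1.
set -euo pipefail

run() { if [ "${APPLY:-0}" = 1 ]; then "$@"; else echo "DRY-RUN: $*"; fi; }

run apt-get update
run apt-get install -y ufw fail2ban smartmontools docker.io docker-compose-plugin
run ufw default deny incoming
run ufw allow OpenSSH
run ufw --force enable
run systemctl enable --now fail2ban docker
# Placeholder for the final step: pull service images/backups from another instance.
run docker compose --project-directory /home/USERNAME/Docker pull
```

Running it plain prints the DRY-RUN plan; `APPLY=1 ./setup.sh` would execute it.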

      All my data is stored offsite twice a year. High-value data is stored on an SSD as a data volume, on 2 other SSDs as encrypted tar archives, and in AWS S3. Rotation is daily/weekly.
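The encrypted-tar rotation could look something like this sketch (the passphrase, paths, and the choice of openssl for encryption are all placeholders standing in for whatever is actually used):

```shell
#!/bin/bash
# Sketch: daily/weekly rotating encrypted tar of a data volume.
set -euo pipefail

day="$(date +%u)"                               # 1-7 (Monday-Sunday)
tag=$([ "$day" = 7 ] && echo weekly || echo daily)

src="$(mktemp -d)"                              # stand-in for the data volume
echo "payload" > "$src/data.txt"

out="$(mktemp -u)-$tag.tar.enc"
tar -czf - -C "$src" . \
  | openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:CHANGE_ME -out "$out"

# Round-trip check before trusting the copy.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:CHANGE_ME -in "$out" | tar -tzf - > /dev/null
echo "encrypted archive OK: $out"
```

The daily/weekly tag in the filename makes it easy to apply different retention to each rotation tier.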