I'm planning to set up proper backups for my server, but I'm not sure which software to use. I've been looking for a solution that offers encrypted, compressed, incremental backups. These seem to be the best options: Borg, Restic, and Kopia.

Does anyone have experience with these, and if so, what was your experience?

EDIT 2023-12-28:

It seems most people are using Restic, and about half of them mention using a wrapper such as resticprofile, creatic, or autorestic.

Mentions per tool: Borg 3 · Restic 7 · Kopia 5
  • @[email protected]
    link
    fedilink
    English
    1711 months ago

    I started out with Borg and basically had no problems with it. Then I moved to Restic. I've been using it for the past few years and have never run into any issues. I can only recommend Restic.

  • @[email protected]
    link
    fedilink
    English
    13
    edit-2
    11 months ago

    Borg (specifically borgmatic) has been working very well for me. I run it on my main server, and on my NAS I have a Borg server Docker container as the repository location.

    I also have another repository location on my friend's NAS. It's super easy to set up multiple targets for the same data.

    I will probably also set up a BorgBase account for yet another backup.

    What I liked a lot here was how easy it is to set up automatic backups, a retention policy, and multiple backup locations.

    Open source was a requirement so you can never get locked out of your data, and it's self-hosted. Finally, there's the ability to mount the backup as a volume/drive: if I want a specific file, I mount that snapshot and just copy that one file over.
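
    For anyone curious, a minimal sketch of what such a borgmatic setup might look like. The repo URLs, paths, and retention values here are placeholders, not the poster's actual config, and the exact config layout depends on your borgmatic version (older releases group options under location:/storage:/retention: sections):

        # Hypothetical borgmatic config plus the commands to run it
        cat > /etc/borgmatic/config.yaml <<'EOF'
        source_directories:
          - /home
          - /etc
        repositories:
          # Borg server container on the local NAS
          - path: ssh://borg@nas.local/./backups/server
          # second target on a friend's NAS
          - path: ssh://borg@friend.example.com/./backups/server
        keep_daily: 7
        keep_weekly: 4
        keep_monthly: 6
        EOF

        # Run the backup (and prune according to the retention policy above)
        borgmatic --verbosity 1

        # Mount a snapshot to pull out a single file
        borgmatic mount --archive latest --mount-point /mnt/borg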

  • @loganb · 11 points · 11 months ago

    Highly recommend Restic. It's simple and flexible. Plus, I've actually used it on two occasions to recover from dead boot drives.
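
    For reference, recovering onto a replacement drive with Restic is roughly the flow below; the repository location and password file are placeholders:

        # Hypothetical recovery sketch; adjust the repository URL and target path.
        export RESTIC_REPOSITORY=/mnt/external/restic-repo
        export RESTIC_PASSWORD_FILE=/root/.restic-pass

        restic snapshots                              # find the snapshot you want
        restic restore latest --target /mnt/newdrive  # restore onto the replacement disk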

  • @ptrckstr · 10 points · 11 months ago

    I'm using borgmatic, a wrapper around Borg that adds some extra functionality.

    Very happy with it, does exactly as advertised.

  • SeriousBug · 7 points · 11 months ago

    I've been using Kopia for all my backups for a couple of years, backing up both my desktop and containers. It's been very reliable, and it has nice features like being able to mount a backup.
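
    Mounting a backup with Kopia looks roughly like this (the mount point is a placeholder, and this assumes a repository is already connected on the machine):

        kopia snapshot list                # find the snapshot/path you need
        mkdir -p /tmp/kopia-mount
        kopia mount all /tmp/kopia-mount & # browse all snapshots as a filesystem
        # ...copy out the files you need, then unmount:
        umount /tmp/kopia-mount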

  • Matthias Liffers · 5 points · 11 months ago

    I'm using Autorestic, a wrapper for Restic that lets you specify everything in a config file. It can fire hooks before/after backups, so I've hooked it into my Healthchecks instance to know whether backups completed successfully.

    One caveat with Restic: it relies on hostnames to work optimally (for incremental backups), so if you're running Autorestic in a container, set the host: option in the config file. My backups took a few hours each night until I fixed this; now they're under 30 minutes.
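
    Roughly what that looks like in the Autorestic config file; the backend name, bucket, paths, and ping URL below are made-up placeholders, so check the Autorestic docs for the exact keys your version expects:

        # Hypothetical ~/.autorestic.yml
        cat > ~/.autorestic.yml <<'EOF'
        version: 2
        backends:
          offsite:
            type: b2
            path: my-bucket:server
        locations:
          server:
            from: /data
            to: offsite
            options:
              backup:
                host: my-server   # pin the hostname so incremental backups stay incremental in a container
            hooks:
              success:
                - curl -fsS https://healthchecks.example.com/ping/your-uuid
              failure:
                - curl -fsS https://healthchecks.example.com/ping/your-uuid/fail
        EOF

        autorestic backup --all   # back up all configured locations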

  • @[email protected]
    link
    fedilink
    English
    511 months ago

    I set up a script to back up my LVM volumes with Kopia. I'm about to purchase some cloud storage to send it off-site. It's been running for a while and deduplication is working great. Encryption is working as far as I can tell. The sync-to-another-repo option was the main selling point for me.
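
    Presumably the script looks something along these lines; the volume group, mount point, and bucket names are invented for illustration, and the exact sync-to flags may differ:

        #!/usr/bin/env bash
        # Hypothetical LVM + Kopia backup sketch; adjust VG/LV names and paths.
        set -euo pipefail

        # Take a consistent point-in-time snapshot of the logical volume
        lvcreate --snapshot --size 5G --name data-snap /dev/vg0/data
        mkdir -p /mnt/data-snap
        mount -o ro /dev/vg0/data-snap /mnt/data-snap

        # Back up the snapshot contents (repository must already be connected)
        kopia snapshot create /mnt/data-snap

        # Clean up the snapshot
        umount /mnt/data-snap
        lvremove -y /dev/vg0/data-snap

        # Later, replicate the whole repository off-site, e.g. to B2
        # (the "sync to another repo" feature mentioned above)
        kopia repository sync-to b2 --bucket my-offsite-bucket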

    • @rambos · 2 points · 11 months ago

      Daily backups to Backblaze B2 and also to local storage with Kopia. It's been running for a year, I think, with no issues at all. I haven't needed a real restore yet, just some restore tests so far.

  • @[email protected]
    link
    fedilink
    English
    511 months ago

    I use Restic with a local external drive that is then synced to Backblaze B2 via rclone.
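
    That workflow is roughly the two-step flow below; the paths, bucket, and rclone remote name are placeholders:

        # Hypothetical sketch: back up locally, then mirror the repo to B2.
        export RESTIC_REPOSITORY=/mnt/external/restic-repo
        export RESTIC_PASSWORD_FILE=/root/.restic-pass

        restic backup /home /etc              # fast backup to the external drive
        restic forget --keep-daily 7 --keep-weekly 4 --prune

        # Mirror the whole repository to Backblaze B2 with rclone
        # ("b2remote" must be configured beforehand with `rclone config`)
        rclone sync /mnt/external/restic-repo b2remote:my-bucket/restic-repo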

    • qazOP · 4 points · 11 months ago

      Why did you choose this option instead of directly syncing it with restic’s rclone backend?

      • @[email protected]
        link
        fedilink
        English
        311 months ago

        An external hard drive is a lot faster than my internet connection and helps fulfill 3-2-1 requirements.

        • @[email protected]
          link
          fedilink
          English
          4
          edit-2
          11 months ago

          Does it though? I had a similar setup in the past, but I didn't feel good about it: if your local backup gets corrupted, that corruption is then synced to your remote location. Since then I run two separate backups, one local and one remote, still with Restic plus resticprofile. The remote is an SFTP server. For Restic I use the rclone backend for SFTP, since I had some connection issues with the built-in SFTP backend (on connection resets it would just abort and not try to reconnect, though I think that has improved since then).
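
          Two independent runs like that look roughly as follows; the repo paths, rclone remote name, and password files are placeholders:

              # Hypothetical sketch: two separate repositories, so corruption
              # in one is never propagated to the other.

              # 1) local run
              restic -r /mnt/external/restic-repo --password-file /root/.pass-local \
                backup /data

              # 2) remote run over SFTP, via restic's rclone backend
              #    ("sftp-remote" is an rclone remote configured separately)
              restic -r rclone:sftp-remote:backups/server --password-file /root/.pass-remote \
                backup /data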

          • @[email protected]
            link
            fedilink
            English
            111 months ago

            I only do an automated copy to B2 from the local archive, no automated sync, which as far as I understand should be non-destructive with versioning enabled.

            If I need to prune etc., I will manually run a sync and then immediately run restic check --read-data from a fast VPS to verify the B2 copy afterwards.
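
            In other words, something like the following; the rclone remote and bucket names are placeholders, and the B2 credentials for restic are assumed to be set via environment variables:

                # Routine: non-destructive copy only (never deletes on the B2 side,
                # so B2 object versioning keeps older data around)
                rclone copy /mnt/external/restic-repo b2remote:my-bucket/restic-repo

                # After a prune: make B2 match the local repo exactly, then verify it
                rclone sync /mnt/external/restic-repo b2remote:my-bucket/restic-repo
                restic -r b2:my-bucket:restic-repo check --read-data   # run from a fast VPS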

        • qazOP · 1 point · 11 months ago

          So no reliability issues with the rclone backend then?

          • @[email protected]
            link
            fedilink
            English
            211 months ago

            I haven't actually used it. I started off doing local backups; B2 was an add-on way later down the road.

  • Nyfure · 4 points · 11 months ago

    I was using Borg; it was a bit complicated and limited, so now I use Kopia.
    It's supposed to support backing up multiple machines into a single repository, so you can deduplicate e.g. synced data too, but I haven't tested that yet.
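
    If that works as documented, it should just be a matter of connecting the second machine to the same repository, something like the sketch below (provider, bucket, and paths are invented, and credentials would be passed via flags or environment):

        # Hypothetical: point a second machine at the repository machine A created,
        # so identical files (e.g. synced folders) are deduplicated across machines.
        kopia repository connect b2 --bucket shared-backups
        kopia snapshot create /home/user/Sync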

      • Nyfure · 3 points · 11 months ago

        The repository index is held locally, so if you use the same repository with multiple machines, they have to rebuild their index every time they switch.
        I also have family PCs I wanted to back up, but Borg doesn't support Windows, so only a hacky WSL setup would have worked.
        But the worst part might be Borg's speed… I don't know what it is, but it was incredibly slow when backing up.

        • lemmyvore · 1 point · 11 months ago

          if you use the same repository with multiple machines, they have to rebuild their index every time they switch

          I'm a beginner with Borg, so sorry in advance if I say something incorrect. I back up the same files to multiple distinct external HDDs, and my solution was to use a distinct repo for each one. They have different IDs, so the caches are different too. The include/exclude list is duplicated, but I can live with that.
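
          That setup boils down to something like the following; the drive mount points and archive naming are placeholders:

              # Hypothetical: one independent Borg repo per external drive.
              borg init --encryption=repokey /mnt/drive-a/borg-repo
              borg init --encryption=repokey /mnt/drive-b/borg-repo

              # Same source paths, backed up to each repo separately
              borg create /mnt/drive-a/borg-repo::'{hostname}-{now}' /home /etc
              borg create /mnt/drive-b/borg-repo::'{hostname}-{now}' /home /etc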

  • pacjo · 3 points · 11 months ago

    For me it's Restic with the creatic wrapper, Apprise for notifications, and some bash/systemd scripts to tie it all together.

    Everything is in a config file, just as god intended.
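
    The systemd side of a setup like that can be a simple service plus timer; the unit names and the wrapper script path below are placeholders (the actual restic/creatic and Apprise calls would live in the script):

        # Hypothetical nightly backup units
        cat > /etc/systemd/system/backup.service <<'EOF'
        [Unit]
        Description=Nightly restic backup

        [Service]
        Type=oneshot
        # run-backup.sh is a placeholder wrapper calling the backup tool and the notifier
        ExecStart=/usr/local/bin/run-backup.sh
        EOF

        cat > /etc/systemd/system/backup.timer <<'EOF'
        [Unit]
        Description=Run the nightly backup

        [Timer]
        OnCalendar=daily
        Persistent=true

        [Install]
        WantedBy=timers.target
        EOF

        systemctl daemon-reload
        systemctl enable --now backup.timer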

  • @dfense · 3 points · 11 months ago

    I use Restic with resticprofile (one config file), notifications via (self-hosted) ntfy.sh, and Wasabi as the backend. I've been very happy with it: it runs reliably and has all the features of a modern backup solution. I especially like the option to mount backups as if they were a filesystem, with snapshots as folders; it makes finding that one file easy without having to do a full restore.
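
    Under the hood that is just restic's S3 backend pointed at Wasabi, roughly as below (the bucket, endpoint, and key values are placeholders; resticprofile wraps the same settings in its config file):

        # Hypothetical Wasabi (S3-compatible) backend for restic
        export AWS_ACCESS_KEY_ID=your-key-id
        export AWS_SECRET_ACCESS_KEY=your-secret-key
        export RESTIC_REPOSITORY=s3:https://s3.wasabisys.com/my-backup-bucket
        export RESTIC_PASSWORD_FILE=/root/.restic-pass

        restic backup /data
        restic mount /mnt/restic   # browse snapshots as folders to grab a single file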

  • @ogarcia · 3 points · 11 months ago

    Restic, without any doubt. I use it with the S3 backend plus an SSH copy, and the performance is excellent (with years of backups).

    Borg I used for a while (to compare) and I do not recommend it; it is not a bad product, but its performance is poor compared to Restic.

    Kopia I didn't know, but from what I have read it seems very similar to Restic with some additions to make it prettier (like having a UI).

    Some people say Kopia is faster at sending data to the repository (and others say it's Restic). Unless you need a UI, I would use Restic.

  • @[email protected]
    link
    fedilink
    English
    211 months ago

    I really like Kopia. I back up my containers and workstations with it, and replicate to S3 nightly. It's great.