I have a 56 TB local Unraid NAS that is parity protected against single drive failure, and while I think a single drive failing and being parity recovered covers data loss 95% of the time, I’m always concerned about two drives failing or a site-/system-wide disaster that takes out the whole NAS.

For other larger-scale self-hosters who are smarter and more prepared: what do you do? Do you sync it off-site? If so, how do you deal with the cost and bandwidth needs? What other backup strategies do you use?

(Sorry if this standard scenario has been discussed - searching didn’t turn up anything.)

  • Joelk111 · 4 points · 20 hours ago

    That NAS software company Linus (of Linus Tech Tips) funded has a feature like this planned, I think.

    An open-source standalone implementation would be dope as hell. Sure, it’d mean you’d need to double your NAS capacity (as you’d have to contribute as much storage as you use), but that’s way easier than building a second NAS and storing/maintaining it somewhere else, or constantly paying for and managing a cloud backup.
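The "contribute as much as you use" rule is simple enough to sketch. This is a hypothetical check under the assumptions in the comment above, not code from any real project:

```python
def can_store(contributed_bytes: int, consumed_bytes: int, overhead: float = 1.0) -> bool:
    """Return True if a peer has contributed enough storage to the pool
    to cover what it consumes from other peers.

    overhead > 1.0 requires extra headroom, e.g. 1.5 would demand 50%
    more contributed capacity than consumed (names are illustrative)."""
    return contributed_bytes >= consumed_bytes * overhead
```

With overhead = 1.0 this is exactly the "double your NAS capacity" situation: every byte you back up elsewhere must be matched by a byte you host for someone else.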

    • WhyJiffie@sh.itjust.works · 1 point · 3 hours ago

      Such a system would need a strict time limit for restoration after a catastrophe. Otherwise leeching would be too easy.

      • Joelk111 · 1 point · 2 hours ago

        That’s an incredibly good point. Bad actors are the worst. Some ideas:

        • Maybe you’d need to contribute your used capacity plus 10% (or more), to account for your and others’ downtime during disasters.
        • A time limit after disasters would be necessary. It’s difficult to think of a proper time limit though, as even a month might not be enough time if your entire house burns down.
        • Maybe a payment system could be set up so that, if your server doesn’t ping for a week, your credit card is automatically charged (after pinging you with many emails first). Sure, that’d suck, but it’d be better than losing your data, and cheaper overall than paying for cloud backups. I’m not sure where that money would go. Maybe it’d be distributed to those who didn’t experience a disaster, or go to the software project, though that would mean people profiting from a disaster. Maybe it could go to a charity of your choice or something.

        Definitely a difficult problem to solve. I’m sure people smarter than me have ideas beyond mine.
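The "charge if it doesn't ping for a week" idea above boils down to a heartbeat check with a warning window. A minimal sketch, with hypothetical names and thresholds chosen only for illustration:

```python
from datetime import datetime, timedelta

# Illustrative thresholds, not from any real project.
PING_TIMEOUT = timedelta(days=7)       # "doesn't ping for a week"
WARNING_THRESHOLD = timedelta(days=5)  # start emailing before charging

def peer_status(last_ping: datetime, now: datetime) -> str:
    """Classify a peer by how long it has been silent."""
    silence = now - last_ping
    if silence >= PING_TIMEOUT:
        return "charge"  # fall back to billing so the backup stays alive
    if silence >= WARNING_THRESHOLD:
        return "warn"    # send reminder emails
    return "ok"
```

The point of the warning window is exactly the "many emails" step: a peer that is merely offline for maintenance gets nagged before money is involved, while a peer recovering from a real disaster keeps its backup hosted, paid for by the fallback charge.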

        • WhyJiffie@sh.itjust.works · 1 point · 1 hour ago

          A time limit after disasters would be necessary. It’s difficult to think of a proper time limit though, as even a month might not be enough time if your entire house burns down.

          and also accounting for low-bandwidth connections… what’s more, some shitty providers even have monthly data caps

          Maybe a payment system could be set up to where, if your server doesn’t ping for a week, your credit card is automatically charged (after pinging you with many emails).

          yeah, that would be almost a necessary feature. being able to hold on to the backup when you really can’t restore.