cross-posted from: https://lemm.ee/post/4274796

Just wanted to share some love for this filesystem.

I’ve been running a btrfs raid1 continuously for over ten years, on a motley assortment of near-garbage hard drives of all different shapes and sizes. None of the original drives are still in it, and that server is now on its fourth motherboard. The data has survived it all!

It’s grown to 6 drives now, and most recently survived the runtime failure of a SATA controller card that four of them were attached to. After replacing it, I was stunned to discover that the volume was uncorrupted and didn’t even require repair.

So, knock on wood; I'm not trying to tempt fate here. I just want to say thank you to all the devs for their hard work, and add some positive feedback to the heap, since btrfs gets way more than its fair share of flak, which I personally find undeserved. Cheers!

  • exu (32 points, 11 months ago)

    Agreed, RAID 1 (and 10) are pretty stable.

    Moderately fun fact: RAID 1 in BTRFS is not really RAID 1 in the traditional sense. Rather, it's a guarantee that your data lives on two separate drives. You don't know which ones, though. You could have one copy of everything on a 12TB drive, with the various secondary copies distributed across three 4TB drives.
    Traditional RAID 1 works ONLY with two drives, with the capacity of the smaller drive as the upper limit. The way to extend a traditional RAID 1 array is to add two new drives and create a RAID 10 with all four (multiple RAID 1 pairs, striped).
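
    As a sketch of the mixed-size setup described above (device names and mountpoint are made up, and mkfs wipes the named drives):

```shell
# Hypothetical devices -- mkfs.btrfs destroys whatever is on them.
# "raid1" here means two copies of every data and metadata chunk,
# placed on whichever two drives currently have the most free space.
mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mount /dev/sda /mnt/pool

# Show how chunks are actually distributed across the drives.
btrfs filesystem usage /mnt/pool

# Grow the pool later with a single odd-sized drive, then rebalance
# so new and existing chunks all carry the raid1 profile.
btrfs device add /dev/sde /mnt/pool
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt/pool
```

    Usable capacity ends up being roughly half the total, capped by the requirement that every chunk fit on two different drives (so the largest drive can't exceed the combined size of the others).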

    • @[email protected]OP
      link
      fedilink
      English
      1111 months ago

      This right here is what has made it so flexible for me to reuse salvaged equipment. You can just chuck a bunch of randomly sized drives at it, and it will give you as much storage as it can while guaranteeing you can lose any one drive. Fantastic.

  • flux (10 points, 11 months ago)

    Any specific advice you would give to others to prevent corruption? Or to keep drives healthy?

    • @[email protected]OP
      link
      fedilink
      English
      311 months ago

      I’m not sure I know enough to be giving out advice, but I can tell you what I do. I do have a cron job to run scrub, to keep the bitrot away. I also tend to replace my drives proactively when they get REALLY old — the flexibility of btrfs raid1 lets me do that one drive at a time instead of two, making it much more affordable. You can plan out your storage with the btrfs calculator.
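
      As a sketch of that routine (mountpoint and device names are placeholders): a scrub rereads every block and checks it against the stored checksum, repairing from the good mirror copy when one exists, and `btrfs replace` handles the one-drive-at-a-time swap:

```shell
# Monthly scrub via cron (e.g. a line in /etc/cron.d/btrfs-scrub):
#   0 3 1 * * root /usr/bin/btrfs scrub start -B /mnt/pool
# -B runs in the foreground so cron can capture the exit status.

# Proactive single-drive swap: copies data straight onto the
# replacement while the pool stays online and redundant.
btrfs replace start /dev/sdb /dev/sdf /mnt/pool
btrfs replace status /mnt/pool
```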

  • @[email protected]
    cake
    link
    fedilink
    611 months ago

    I’ve been using it in Fedora since they switched to it as the default FS. I have not done anything special. I am not trying anything fancy except compress-force=zstd:1. Seems good to me!
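
    For anyone curious, that option goes in the mount options; a hypothetical fstab line might look like this (UUID and mountpoint are placeholders):

```shell
# /etc/fstab sketch -- compress-force=zstd:1 compresses everything at
# zstd's fastest level, skipping btrfs's per-extent compressibility
# heuristic that plain compress= would apply.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  btrfs  compress-force=zstd:1  0 0
```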

    • deadcatbounce (1 point, 9 months ago)

      Why just :1? The default is :3, and looking at the timings for zstd deflate speed vs. compression level (Google for it …), it only becomes slow at around level 7.

      Doesn't mean shit to me, but I suggest you reconsider.

      • @[email protected]
        cake
        link
        fedilink
        19 months ago

        Slow relative to what? Any zstd compression, while really fast, will be slower than native write speeds to my NVMe. A tiny bit of ratio gain isn't worthwhile to me.

  • @[email protected]
    link
    fedilink
    English
    511 months ago

    I use it on my steam deck microsd to cram more shit in via compression. Main drive is left as ext4 though so case folding can be used for particularly janky windows games or mods.
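
    For reference, the ext4 case-folding setup looks roughly like this (partition path and directory name are hypothetical; the feature has to be enabled at mkfs time and needs Linux 5.2+):

```shell
# Hypothetical partition -- mkfs.ext4 wipes it.
mkfs.ext4 -O casefold /dev/nvme0n1p3
mount /dev/nvme0n1p3 /mnt/games

# Case-insensitivity is opt-in per (empty) directory via the
# casefold attribute; lookups under it then ignore case, which keeps
# Windows games and mods with inconsistent path casing working.
mkdir /mnt/games/steamlibrary
chattr +F /mnt/games/steamlibrary
```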

  • @[email protected]
    link
    fedilink
    111 months ago

    I’ve been wanting to build a raid for a while, what raid controller do you use/would you recommend?

    • @[email protected]OP
      link
      fedilink
      English
      2
      edit-2
      11 months ago

      For a software RAID like this, you don’t want a hardware RAID controller, per se – you just want a bunch of ports. After my recent controller failure, I decided to try one of these. It’s slick as hell, sitting close to the motherboard, and seems rock solid so far. We’ll see!

  • @TCB13 (-10 points, 11 months ago)

    Yeah, BTRFS is way more reliable than Ext4. One simple power failure or other hardware fuckup with Ext4 and you can be sure all your data is gone; with BTRFS your data will survive a lot of shit.

      • @TCB13 (-1 point, 11 months ago)

        Yes it is :) but comparatively I’ve never lost a volume / disk to BTRFS in years of the same scenarios.

    • Quazatron (14 points, 11 months ago)

      My experience says otherwise.

      Ext4 is rock solid and will survive power loss without a problem.

      I love btrfs for the compression and snapshot capabilities, but it still has a long way to go to reach ext4 maturity.

      That’s not a shot at btrfs, it’s just that filesystem maturity and reliability take time.

      • @TCB13 (2 points, 11 months ago)

        I can't share your enthusiasm about Ext4's safety. I've had multiple disks lost to simple power failures at home and to more complex hardware failures at datacenters. At the time I migrated to XFS, which also always performed better than Ext4 when things failed, and then moved to BTRFS when it became mostly stable.

        • Quazatron (1 point, 11 months ago)

          I really appreciate how good XFS is, but for the longest time it had a feature missing that I really like to have: shrinking partitions.

    • @[email protected]
      link
      fedilink
      English
      411 months ago

      I've been using Ext4 for over 150 years now and I never had any issues with it. It not only survived multiple power failures but also a house fire, a couple of direct EMP hits, and a zombie apocalypse.

    • @Wispy2891 (3 points, 11 months ago)

      I had the exact opposite experience. A power loss destroyed my btrfs boot drive; it couldn't be mounted anymore.