Hi all. I’m trying to choose a configuration for my home storage. Speed isn’t a priority; I want a balance of stability and performance. I was thinking of making a RAID 6 array with an ext4 file system across 4 disks of 2 TB each. Asking for advice: would this configuration be a good choice?

Note - I am going to build the RAID array from external USB drives, which I will plug into the Orange Pi.
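For reference, the setup described above would typically be built with mdadm on Linux. A minimal sketch, assuming hypothetical device names (check `lsblk` for your actual USB drives):

```shell
# Create a 4-disk RAID 6 array (2 disks of parity, ~4 TB usable from 4x2 TB).
# Device names below are placeholders.
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Format with ext4 and mount.
mkfs.ext4 /dev/md0
mkdir -p /mnt/storage
mount /dev/md0 /mnt/storage

# Persist the array definition so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```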

  • @[email protected]
    11 years ago

    zfs is excellent. It’s enterprise-grade, designed to suit the whole “I’ve got 60 disks filling up a 4RU top-loading SAN; if we expand, we have to buy another 60-disk expansion shelf” scenario, and because of that it works perfectly for expansion. You don’t make a single raidz holding 60 disks. You’d make them in groups of, say, 6 or 8 or 10, whatever suits your needs for speed, storage, and resilience. When you expand, you drop another whole raidz vdev into the pool. Maybe it’s another 6 disks into the new storage shelf.
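    The grouping described above can be sketched with zpool commands (the pool name `tank` and all device paths are hypothetical):

    ```shell
    # Create a pool with one 6-disk raidz2 vdev.
    zpool create tank raidz2 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

    # Later expansion: add another whole 6-disk raidz2 vdev to the same pool.
    zpool add tank raidz2 \
        /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl

    # The pool now stripes new writes across both vdevs.
    zpool status tank
    ```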

    But since the 2016 article you linked, the OpenZFS project has been promising expansion of individual raidz vdevs. The 2021 announcement: https://arstechnica.com/gadgets/2021/06/raidz-expansion-code-lands-in-openzfs-master/

    In 2022, an update: the feature design was complete, but there was no code yet: https://freebsdfoundation.org/blog/raid-z-expansion-feature-for-zfs/

    The actual pull request is here: https://github.com/openzfs/zfs/pull/15022

    And the last announcement update was in the June 2023 leadership meeting, recorded here: https://m.youtube.com/watch?time_continue=1&v=2p32m-7FNpM

    You might think this is slow, and yeah, it’s a snail’s pace. But it’s not from lack of work; it’s truly because it’s part of the overall strategy of making sure ZFS, every update, and every feature stays just as robust.

    I’m a fan, even after having hardware and disks fail both in the enterprise and at home. zfs import is so agnostic you can just pull in the pool; it doesn’t matter whether it came from BSD or Linux.
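    That cross-platform move is just an export on the old box and an import on the new one (pool name `tank` is a placeholder):

    ```shell
    # On the old machine (FreeBSD or Linux, doesn't matter):
    zpool export tank

    # Move the disks over, then on the new machine:
    zpool import          # lists pools found on attached disks
    zpool import tank     # imports the pool by name (-f if export was skipped)
    ```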

    • chiisana
      11 years ago

      You’re missing the point. For most residential deployments, which appears to be the use case in this thread, it is not realistic to expect a deployment to start with its intended final/stable quantity of drives. Say someone deploys 6 drives today as RAIDZ2, a sizeable commitment for most home setups: they cannot later add 2 more, and then 2 more, without affecting overall redundancy (data on each new pair will either have no redundancy or 1:1 redundancy in its own vdev). Let’s also not pretend everyone has a spare cluster sitting around to house all their current data while rebuilding the pool for expansion.

      This inability to expand linearly, compared to conventional md RAID (or even hardware RAID, assuming you have enough ports) where the smallest unit of expansion is 1 drive at a time, at any time, basically eliminates ZFS as a suitable candidate for most residential usage. The vast majority of home users will not be expanding their array in batches of 6, 8, or 10 drives, and even if they did, most wouldn’t want to take the extra hit of “sacrificing” more drives to parity. All of these things are perfectly normal and expected in the enterprise space, but not at home.
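      For comparison, the one-drive-at-a-time expansion with md RAID looks like this (array and device names are placeholders):

      ```shell
      # Add a new disk as a spare, then grow the RAID 6 array from 4 to 5 members.
      mdadm --add /dev/md0 /dev/sde
      mdadm --grow /dev/md0 --raid-devices=5

      # Once the reshape finishes, grow the ext4 filesystem into the new space.
      resize2fs /dev/md0
      ```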

      And yes, the article I linked has multiple updates covering what is happening now; it’s not a stagnant, outdated article. I’m well aware they’ve been intending to merge the vdev expansion PR “sometime this year” for I don’t even know how many years now. I’ll re-evaluate ZFS when it is merged and appropriately battle-tested in the wild.