deleted by creator
That was the cheapest option. 🤭
deleted by creator
I appreciate it. The modern world has destroyed my attention span. But I wouldn’t even need to be awake to read that one.
Just download the GBs when you get home
Came to post the same. Seems like the most awkward possible way to phrase that.
Your “Disks not included” suggestion, or heck, just “empty” would surely be better.
Oh no.
Unfortunately I have a lot of experience with this: attaching permanent array members via USB is a bad idea. OP, if it’s not too late, and assuming you haven’t already and decided to double down on yolo, I’d recommend reading about the downsides of this approach. It is easy to find relevant discussions (and catastrophes) in r/zfs.
Thunderbolt enclosures are a bit more expensive, but they won’t periodically fuck up your shit just because.
Thanks for the warning ⚠️🙏
This isn’t my first rodeo with ZFS on USB. I’ve been running USB for a few years now. Recently I ran this particular box through a battery of tests and I’m reasonably confident that with my particular set of hardware it’ll be fine. It passed everything I threw at it, once connected to a good port on my machine. But you’re generally right and as you can see I discussed that in the testing thread, and I encountered some issues that I managed to solve. If you think I’ve missed something specific - let me know! 😊
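For anyone curious what that kind of battery of tests can look like, here's a minimal burn-in sketch. The pool name `tank`, the loop count, and the shape of the loop are my placeholders, not the OP's actual procedure; `zpool scrub -w` (wait for completion) needs a reasonably recent OpenZFS.

```shell
#!/bin/sh
# Minimal ZFS-on-USB burn-in sketch. "tank" and the loop count are
# placeholders, not the OP's actual setup.
pool_healthy() {
  # `zpool status -x` prints exactly this line when nothing is wrong
  case "$1" in
    "all pools are healthy") echo yes ;;
    *) echo no ;;
  esac
}
# Typical loop (needs root and a real pool):
#   for i in 1 2 3; do
#     zpool scrub -w tank   # -w waits for the scrub to finish
#     [ "$(pool_healthy "$(zpool status -x)")" = yes ] || { echo FAILED; break; }
#   done
```

The idea is just to scrub repeatedly under the same load you'll run in production and bail the moment the pool reports anything other than healthy.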
Is it because USB doesn’t expose proper UUIDs or something?
Bus issues usually. Having a disk (or 4) drop out of a ZFS filesystem regularly isn’t a good time.
If you can find a combination of enclosure, driver/firmware and USB port that provides you with a reliable connection to the drive then USB is just another storage bus. It’s generally not recommended because that combination (enclosure, chipset, firmware, driver, port) is so variable from situation to situation but if you know how to address the pitfalls it can usually work fine.
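One way to judge that combination empirically: run the array under sustained load and count bus resets/disconnects in the kernel log. A rough Linux-only sketch; the grep pattern is a loose heuristic I'm assuming covers the common messages, not an exhaustive one.

```shell
#!/bin/sh
# Count USB reset/disconnect events in kernel log text fed on stdin.
# A stable enclosure/port combination should show zero of these
# under sustained load.
count_usb_resets() {
  grep -ciE 'usb [0-9.-]+: (reset|.*disconnect)' || true
}
# Typical use:  dmesg | count_usb_resets
```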
You want ASMedia ASM1351 (heatsinked) or ASM235CM on the device side 🥹
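If you're not sure which bridge an enclosure uses, `lsusb` will usually tell you. A sketch: 174c is ASMedia's USB vendor ID, but matching on it says nothing about which exact bridge model is behind it, so treat this as a hint rather than a whitelist.

```shell
#!/bin/sh
# Flag ASMedia USB-to-SATA bridges in `lsusb` output by vendor ID.
# 174c is ASMedia's vendor ID; this doesn't distinguish ASM1351
# from ASM235CM or anything else in the family.
is_asmedia() {
  case "$1" in
    *"ID 174c:"*) echo asmedia ;;
    *) echo other ;;
  esac
}
# Typical use:
#   lsusb | while IFS= read -r line; do
#     echo "$line -> $(is_asmedia "$line")"
#   done
```

`lsusb -t` additionally shows whether the `uas` driver (good) or plain `usb-storage` (slower fallback) is bound to each device.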
This box has 4x ASM235CM and from the testing I’ve conducted over the last week it seems rock solid, so long as it’s not connected to the Ryzen’s built-in USB controller. It’s been flawless on the B350 chipset’s USB controller.
so long as it’s not connected to the Ryzen’s built-in USB controller.
Could you elaborate? Do you happen to have any theories as to why?
I have a Zen 3 chip, and while I’m not planning anything like this, I’m just curious.
I don’t know, but I’d guess the buffered chipset controller has more stability during certain power state transitions.
I think I’ve seen this hypothesis too and it makes sense to me.
If I’m building a new AMD system today, I’d look for a board that exposes more of the chipset-provided USB ports. Otherwise I’d budget for a high quality 4-port PCIe USB controller, if I’m planning to rely a lot on USB on that system.
This article provides some context. Now I do have the latest firmware which should have these fixes but they don’t seem to be foolproof. I’ve seen reports around the web that the firmware improves things but doesn’t completely eliminate them.
If you’ve seen devices disconnecting and reconnecting on occasion, this could be why.
Been on USB enclosures using Linux software raid for 20 years and never lost a bit so far.
Didn’t go cheap with the USB JBODs, and I have no idea whether ZFS is more sensitive to USB, since I don’t use ZFS.
Anyway, I’ve been running two JBODs over USB:
- 4 SSDs split across two RAID1s on USB 3
- 2 HDDs in RAID1 on USB-C
All three arrays are managed by the Linux software RAID stack.
I think I started the original one in the 2000s, then upgraded the disks many times, slowly moving to SSDs to lower heat and power usage.
Keep them COOL, that’s important.
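For reference, a two-disk RAID1 like the ones above is typically built with `mdadm`. A sketch with placeholder device names; the real command destroys data on both members, so it's left commented.

```shell
#!/bin/sh
# Sketch of the mdadm invocation behind a two-disk RAID1.
# /dev/md0, /dev/sda and /dev/sdb are placeholders.
mk_raid1_cmd() {
  md_dev=$1; disk_a=$2; disk_b=$3
  echo "mdadm --create $md_dev --level=1 --raid-devices=2 $disk_a $disk_b"
}
# To actually create the array (DESTROYS data on both disks):
#   $(mk_raid1_cmd /dev/md0 /dev/sda /dev/sdb)
# Then watch resync progress with:  cat /proc/mdstat
```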
What an assertion - if you’re not using ZFS, how do you know you’ve “never lost a bit so far”?
I go to my disks and count my bits every morning, the total is always there, never lost one!
You must be reasonably decent at counting!
Yeah, you know there are only 10 types of people in the world: those who can count in binary and the others…
Found the bit counter
I’ve been on the USB train since 2019.
You’re exactly right, you gotta get devices with good USB-to-SATA chipsets, and you gotta keep them cool.
I’ve been using a mix of WD Elements, WD MyBook and StarTech/Vantec enclosures (ASM1351). I’ve had to cool all the chipsets on the WDs because they bolt the PCB straight to the drive, so it picks up the drive’s heat.
From all my testing I’ve discovered that:
- ASM1351 and ASM235CM are generally problem-free, but the former needs passive cooling if close to a disk. A small heatsink adhered with standard double-sided heat conductive tape is good enough.
- Host controllers matter too. Intel is generally problem-free, and so is VIA. AMD has issues with the CPU-side controllers on some models, which still aren’t fully solved.
I like this box in particular because it uses a very straightforward design. It’s got 4x ASM235CM with cooling connected to a VIA hub. It’s got a built-in power supply, fan, it even comes with good cables. It fixes a lot of the system variables to known good values. You’re left with connecting it to a good USB host controller.
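On the "keep them cool" point: drive temperature is easy to watch over most USB bridges with smartmontools. A sketch; `-d sat` tells smartctl to tunnel ATA commands through the bridge, and the device path is a placeholder.

```shell
#!/bin/sh
# Extract the raw Temperature_Celsius value (SMART attribute 194)
# from `smartctl -a` output fed on stdin. Column 10 is RAW_VALUE.
drive_temp() {
  awk '$1 == 194 && $2 == "Temperature_Celsius" { print $10 }'
}
# Typical use over a USB-to-SATA bridge (placeholder device):
#   smartctl -a -d sat /dev/sda | drive_temp
```

Note that some drives report airflow temperature under attribute 190 (Airflow_Temperature_Cel) instead, so check your drive's attribute table first.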
WD PCB on disk
I imagine if someone had to do it this way for whatever reason, Thunderbolt would be more reliable? Assuming it’s true Thunderbolt using a SATA bridge?
Yes. Unless you treat each enclosure as its own failure domain it will still be a compromise, but it’s a lot better.
deleted by creator
What are you putting in them?
- 8x 8TB in a set of 2, some shucked WDs, some IronWolfs
- 5x 16TB in a set of 2, “recertified” WDs from serverpartdeals.com
Sweet, just backing up the family photos then?
And probably a bunch of Linux ISOs.
deleted by creator
Two machines: a main server/workstation and a small off-site backup machine that runs the same services but has less compute and RAM.
You should take a look at SAS enclosures. They’re pretty cheap, like $200 for a 16-bay. That would be so much more reliable.
I thought about it, but it typically requires extra PCIe cards that I can’t rely on as there’s no space in one of the machines and no PCIe slots in the other. That’s why I did a careful search till I stumbled upon this particular enclosure and then I tested one with ZFS for over a week before buying the rest.
That makes sense. I’m a lazy guy and I didn’t want to try testing so I went with a known quantity.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- PCIe: Peripheral Component Interconnect Express
- SATA: Serial AT Attachment (interface for mass storage)
- ZFS: Solaris/Linux filesystem focusing on data integrity
3 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.
[Thread #668 for this sub, first seen 9th Apr 2024, 13:55]