I can’t say for sure, but there is a good chance I might have a problem.
The main picture attached to this post is a pair of dual bifurcation cards, each with a pair of Samsung PM963 1TB enterprise NVMes.
They are going into my r730XD, which… is getting pretty full. This will fill up the last empty PCIe slots.
But, knock on wood, my r730XD supports bifurcation! LOTS of bifurcation.
As a result, it now has more HDDs and NVMes than I can count.
What’s the problem, you ask? Well, that is just one of the many servers I have lying around here, all completely filled with NVMe and SATA SSDs…
Figured I would share; a bunch of SSDs is always a pretty sight.
And, as of two hours ago, my particular Lemmy instance was migrated to these new NVMes, completely transparently too.
Hell yeah. I personally just switched over from a QNAP HDD-based NAS to a DIY NVMe-based NAS running Arch and ZFS, and I couldn’t be happier with the performance. It is wild seeing ZFS scrubs happening at 1.3 GB/s. With NVMe prices coming down, I really think that if you can afford the slightly higher cost per GB, it is so worth it in the long run.
I am using a Konyead 3001K, which has 4 NVMe slots on one card, but unfortunately my mobo only supports x8x4x4 bifurcation, so I can only use 3 of them. Luckily my mobo has two on-board slots, so I just used one of them to make a 4-drive raidz1.
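For anyone curious what that layout looks like on the ZFS side, here's a minimal sketch of building a 4-drive raidz1 like the one described above. The pool name and device paths are made up for illustration; substitute your own /dev/disk/by-id/ entries.

```sh
# Hypothetical device names -- three NVMes on the bifurcated card
# plus one on-board drive, pooled together as a single raidz1 vdev.
zpool create nvmetank raidz1 \
  /dev/disk/by-id/nvme-card-drive1 \
  /dev/disk/by-id/nvme-card-drive2 \
  /dev/disk/by-id/nvme-card-drive3 \
  /dev/disk/by-id/nvme-onboard-drive4

# Check the layout, then kick off a scrub to see what the throughput looks like.
zpool status nvmetank
zpool scrub nvmetank
```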