I’ve recently picked up an Intel P4000 and I’m purchasing some parts to set it up. Since it’s an older platform, I get that there are some limitations on what I can use, so I’m worried about buying things that aren’t compatible.

I’m interested in installing a Dell BOSS-N1 Monolithic to run Proxmox in RAID1, but I have some concerns:

  • Will it even work with my system board? Maybe my search skills suck, but I can’t glean from the Internet how tightly controlled server hardware ecosystems are. Would my motherboard even recognize a component like this, or the drives installed on it?

  • What drives work with it? According to the user manual, there are only three supported drives, and they have to be 480 GB or 960 GB in size. Has anyone tested using different NVMe M.2 drives?

  • @computergeek125 · 6 months ago

    Having used Dell BOSS-S1 cards even in other Dell servers, there are firmware integration limitations to watch for. Even an R730 won’t fully tolerate an R740’s BOSS - those I’ve only seen work in the R740 and newer. The control interface for the S1 (and presumably S2) cards is integrated into a menu system in Dell’s BIOS and iDRAC.

    I know that specifically for the BOSS-S1, there’s a Startech board with a similar form factor that uses the same SATA RAID chipset, but without the Dell firmware. That Startech board works in non-Dell servers and workstations and has mostly the same features as the S1. (I 100% used my laptop’s eGPU enclosure to set them up a few times. It also definitely caused Windows to BSOD a few times because it doesn’t know how to eject an entire disk controller, but that’s entirely my fault.) The Startech controller hasn’t really given me any major problems once I got it up and running, and it has run in my R730s with near 100% uptime for a few months now.

    You may want to check whether Startech has an NVMe version now as a counterpart to the BOSS-N1 - I haven’t looked recently.

    Something else to consider is what your RAID array will actually be doing: M.2 SATA may be fast enough for a boot disk, while your “real” data array uses SAS or NVMe to get to the CPU. You can even elect to use something fancier like Ceph or ZFS to handle the real data disks without a hardware RAID card. If you’re just booting the server hypervisor and maybe a low-level agent VM or two, and the real data is on another array, that Startech card may be for you. (You just need FreeDOS to re-flash it to EFI mode.)
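
    As a quick sanity check on that split (a generic sketch, not specific to any one controller - run it from the Proxmox shell or a live USB), the hardware mirror should show up as a single block device while the pass-through data disks appear individually:

        # The card's RAID1 volume should appear as ONE device; the data disks appear separately.
        lsblk -o NAME,SIZE,MODEL,TRAN,TYPE

        # Stable device paths, handy later when building a ZFS pool on the data disks.
        ls -l /dev/disk/by-id/ | grep -v -- -part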

    • @[email protected]OP · 6 months ago

      Thank you for your insights and suggestions. Very glad I didn’t buy anything yet. There appear to be a number of PCIe options for either SATA or NVMe, but I’ve decided to get a couple of smaller, cheap 2.5" SATA SSDs just for the OS.

      The plan is to just have Proxmox installed in RAID1 on the two SSDs for redundancy, then the real data array is 4x12 TB HGST drives in some ZFS configuration. Does this seem sane?
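
      A rough sketch of what that data pool could look like (device names are placeholders; whether to go with two mirrored pairs or a single raidz1 vdev is a trade-off, not something settled here):

          # Two mirrored pairs: ~24 TB usable, better random I/O, survives one failure per pair.
          zpool create -o ashift=12 tank \
              mirror /dev/disk/by-id/ata-HGST_A /dev/disk/by-id/ata-HGST_B \
              mirror /dev/disk/by-id/ata-HGST_C /dev/disk/by-id/ata-HGST_D

          # Alternative, a single raidz1 vdev: ~36 TB usable, survives any one disk failure.
          # zpool create -o ashift=12 tank raidz1 <disk1> <disk2> <disk3> <disk4>

          zpool status tank

      Either way, the boot mirror and the data pool stay independent, so losing an OS SSD never touches the data disks.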

  • @Ptsf · 6 months ago

    From what I can tell this isn’t going to function, mainly due to the additional proprietary connectors on that Dell card. I imagine there’s some sort of firmware integration as well, though, since it’s got a data line coming off it too. Quick side question: is there any reason you’re not just picking up some N95 boxes off Amazon? They’re so cheap they’d pay for themselves in power savings versus this box alone, and you could essentially treat them as a RAID 1 of the entire system with a handful of backup scripts.
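
    As a loose sketch of that "RAID 1 across two boxes" idea (VM ID, hostname, and paths below are made up), one way to approximate it is a nightly Proxmox vzdump plus an rsync to the second box:

        # Dump VM 100 as a compressed snapshot backup (hypothetical VM ID and dump directory).
        vzdump 100 --mode snapshot --compress zstd --dumpdir /var/lib/vz/dump

        # Mirror the backup directory to the second box (hypothetical hostname).
        rsync -a --delete /var/lib/vz/dump/ backup-n95:/srv/pve-backups/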

    • @[email protected]OP · 6 months ago

      Thanks for your time and consideration. I planned a few different builds around an N100, but everything priced out at way more than I wanted to spend, even without drives. I saw this server for $80 on Facebook Marketplace, and that was that.

      RAID1 is only for the Proxmox installation; then I will use ZFS for the block storage.