I am planning to build a multipurpose home server. It will be a NAS, a virtualization host, and run the typical self-hosted services. I want all of these services to have high uptime and be protected from power surges/blackouts, so I will put my server on a UPS.

I also want to run an LLM server on this machine, so I plan to add one or more GPUs and pass them through to a VM. I do not care about high uptime on the LLM server. However, this of course means that I will need a more powerful UPS, which I do not have the space for.

My plan is to get a second power supply to power only the GPUs. I do not want to put this PSU on the UPS. I will turn on the second PSU via an Add2PSU.

In the event of a blackout, this means that the base system will keep full power and the GPUs will still get power through the PCIe slot, but they will lose the power from their dedicated power connectors.

Obviously this will slow down or kill the LLM server, but will this have an effect on the rest of the system?
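
For what it's worth, a rough mitigation sketch (assuming the UPS is managed by NUT and the LLM VM runs under libvirt; the UPS name "myups@localhost" and VM name "llm-vm" below are placeholders): a small script that shuts the VM down cleanly as soon as the UPS reports it is on battery, so the GPUs go idle before the second PSU drops.

```python
#!/usr/bin/env python3
"""Sketch: shut the LLM VM down cleanly as soon as the UPS reports a power
failure, so the GPUs are idle before the non-UPS PSU drops out.
Assumes NUT (upsc) and libvirt (virsh) are installed; the UPS name
"myups@localhost" and the VM name "llm-vm" are placeholders."""

import subprocess
import time

UPS = "myups@localhost"   # placeholder NUT UPS identifier
LLM_VM = "llm-vm"         # placeholder libvirt domain name
POLL_SECONDS = 10


def ups_on_battery() -> bool:
    # upsc prints the status string, e.g. "OL" (online) or "OB" (on battery)
    out = subprocess.run(["upsc", UPS, "ups.status"],
                         capture_output=True, text=True).stdout
    return "OB" in out.split()


def vm_running(name: str) -> bool:
    out = subprocess.run(["virsh", "domstate", name],
                         capture_output=True, text=True).stdout.strip()
    return out == "running"


def main() -> None:
    while True:
        if ups_on_battery() and vm_running(LLM_VM):
            # ACPI shutdown request; the guest handles it like a power button press
            subprocess.run(["virsh", "shutdown", LLM_VM])
        time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    main()
```

(NUT's upsmon can still handle shutting the host itself down on low battery; this sketch only covers the GPU VM.)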

  • @just_another_person
    -3 points · 6 months ago

    You are confusing “plug and play” with “hot swap/plug CAPABLE”. The spec allows specifically designed hardware to come and go, like ExpressCard, Thunderbolt, or USB4 lane-assigned devices, for example. That’s a feature built for a specific type of hardware to tolerate things like accepting current, or having a carrier chip that at least communicates with the PCIe bridge to report its current status. Almost all of these types of devices are not only designed for this, they are powered by the hardware they are plugged into, allowing that power to be negotiated and controlled by the bridge.

    NOT like a giant GPU that requires its own power supply current and ground.

    But hey, you read it on the Internet and seem to think it’s possible. Go ahead and try it out with your hardware and see what happens.

    • @Omgpwnies
      4 points · 6 months ago

      Dude… you’re the one who said PCIe isn’t plug and play, which is incorrect. Plug and play simply means not having to manually assign IRQ/DMA/etc. before using the peripheral, with the system/OS handling that automatically instead, as well as having peripherals identify themselves so the OS can automatically assign drivers. PCIe is fully plug-and-play compatible via ACPI, and hot swapping is supported by the protocol if the peripheral also supports it.
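
      If you want to check a given machine, here's a rough sketch (Linux only, run as root so lspci can read the capability registers) that lists which ports advertise hot-plug in their Slot Capabilities:

      ```python
      #!/usr/bin/env python3
      """Sketch: list which PCIe ports on this machine advertise hot-plug support,
      by scanning `lspci -vv` output for the HotPlug+ flag in each port's
      Slot Capabilities. Illustrative only; run as root so lspci can read the
      capability registers."""

      import subprocess


      def hotplug_capable_ports() -> list[str]:
          out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
          ports, current = [], None
          for line in out.splitlines():
              if line and not line[0].isspace():
                  current = line  # device header, e.g. "00:1c.0 PCI bridge: ..."
              elif "SltCap:" in line and "HotPlug+" in line and current:
                  ports.append(current)
          return ports


      if __name__ == "__main__":
          for port in hotplug_capable_ports():
              print(port)
      ```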

      • @just_another_person
        -5 points · 6 months ago

        Again…it is not. You can’t just unplug and swap anything into any PCIe slot. The protocol supports it, but it is not, by any definition, live-swappable by default.

        My speedometer says 200, but my car does not go that fast.

        An egg isn’t an omelet.

        The statement “humans can fly” is technically true, but not without a plane.

        A device that supports hot swap, in a compatible and specifically configured slot, could be, though.

        I can keep going forever with this.

        • @Omgpwnies
          1 point · 6 months ago

          Are you slow? Nobody is arguing that you can hot swap a GPU. That’s not what people are correcting you on.

          YOU claimed that PCIE is not PLUG AND PLAY

          NO. PCIE is not plug and play.

          That was your comment. It was wrong. You were wrong.

    • @[email protected]
      1 point · 6 months ago

      Right, it requires device support. And most GPUs won’t support it. But it’s by no means impossible.

      I’ve got some junk hardware at work; I’ll try next time I’m in and let you know.
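
      Probably starting with a software remove/rescan through sysfs before pulling anything physically. Rough sketch; the PCI address below is a placeholder, and it needs root:

      ```python
      #!/usr/bin/env python3
      """Sketch: software-remove a PCIe device and rescan the bus through sysfs.
      This exercises the kernel's hot-plug paths without physically unplugging
      anything; the PCI address "0000:01:00.0" is a placeholder. Run as root."""

      import time
      from pathlib import Path

      DEVICE = "0000:01:00.0"  # placeholder PCI address of a spare test card


      def remove_device(addr: str) -> None:
          # Asks the kernel to detach the driver and delete the device node
          Path(f"/sys/bus/pci/devices/{addr}/remove").write_text("1")


      def rescan_bus() -> None:
          # Re-enumerates the bus; a device that is still present gets rediscovered
          Path("/sys/bus/pci/rescan").write_text("1")


      if __name__ == "__main__":
          remove_device(DEVICE)
          time.sleep(2)
          rescan_bus()
      ```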

      • @just_another_person
        -6 points · 6 months ago

        Wow. Okay. Pull that card, buddy. Pedantic on the Internet. What a shocker. 🙄

    • @just_another_person
      -3 points · 6 months ago

      You have multiple accounts, and are sadly so consumed with Internet points that you used both of them to downvote when you’re wrong. You’re pathetic. Get a hobby. Maybe learning about hardware!