I’m looking for a machine to run OpenGPT, Stable Diffusion, and Blender. I’m on the precipice of buying an Alienware with a Ryzen 9 and a Radeon RX 6850M. I’ve never needed anything near this level on Linux and I’m scared TBH. I’d much rather get a System76, but the equivalent hardware has Nvidia and costs more than twice as much. While skimming for issues with current hardware, I saw something about a Legion laptop that could only use Intel RAID for the file system, and that this was a nightmare with generic distro kernels. What other stuff like this is happening with current laptop hardware?

I can barely manage a Gentoo install by following the handbook, understanding a third of it, and taking a few weeks to get sorted.

I spent all of yesterday afternoon sorting through all of the Linux hardware data in this Stable Diffusion telemetry: https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html

That dataset has just over 5k total entries, 699 of which are valid Linux entries (not including LSFW). It contains no entries for a Radeon RX 6850M. I’m super nervous about buying a laptop that costs as much as my first car. I never want to run Windows again. What resources can I check to boost my confidence that this is going to work on Fedora Workstation?

If anyone is interested, the SD GitHub dataset has the following number of entries per AMD card model (a rough sketch of how to tally these from an export follows the lists below):

- 3 × RX 5700 XT / 8 GB
- 2 × RX 580 / 4 GB
- 3 × RX 580 / 8 GB
- 15 × RX 6600 XT / 8 GB
- 1 × RX 6650 XT / 8 GB
- 31 × RX 6700 XT / 12 GB
- 10 × RX 6750 XT / 12 GB
- 10 × RX 6800 / 16 GB
- 19 × RX 6800 XT / 16 GB
- 15 × RX 6900 XT / 16 GB
- 9 × RX 6950 XT / 16 GB
- 7 × RX 7900 XT / 20 GB
- 39 × RX 7900 XTX / 24 GB
- 6 × RX Vega / 8 GB

Other common cards used in Linux and in this dataset are:

NVIDIA
- 39 × A100-SXM4 / 79 GB
- 20 × GTX 1070 / 8 GB
- 11 × GTX 1080 Ti / 11 GB
- 13 × H100-PCIe / 79 GB
- 12 × RTX 2070 / 8 GB
- 12 × RTX 2080 Ti / 22 GB
- 31 × RTX 3060 / 12 GB
- 16 × RTX 3070 / 8 GB
- 10 × RTX 3080 / 10 GB
- 39 × RTX 3090 / 24 GB
- 11 × RTX 3090 Ti / 24 GB
- 10 × RTX 4070 Ti / 12 GB
- 87 × RTX 4090 / 24 GB
- 27 × RTX A4000 / 16 GB
- 15 × RTX A5000 / 24 GB
TESLA
- 26 × T4 / 15 GB
- 11 × V100S-PCIE / 32 GB
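
For anyone who wants to slice the data differently, here is a rough sketch of the kind of tally above (entries per device, Linux only). The CSV export and the "device"/"platform" column names are assumptions about how you might save the benchmark table locally, not the page's actual schema:

```python
import csv
from collections import Counter

# Assumes the benchmark table was exported to benchmark.csv with (hypothetical)
# "device" and "platform" columns; adjust the names to whatever the real export uses.
counts = Counter()
with open("benchmark.csv", newline="") as f:
    for row in csv.DictReader(f):
        if "linux" in row.get("platform", "").lower():
            counts[row.get("device", "unknown")] += 1

for device, n in counts.most_common():
    print(f"{n:3d}  {device}")
```
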
  • poVoq

    Uhm, I don’t think you will have much luck with an AMD laptop GPU and Stable Diffusion. AMD’s ROCm support for desktop consumer GPUs is already atrocious.

    Maybe get a cheaper laptop that supports an eGPU enclosure? No idea if that actually works well, but I think the chances are a lot better.

    • @j4k3 (OP)

      I’ve seen people say this kind of thing. It is why I went to the data. There are 176 out of 699 that are using AMD just fine. Around 15 of those look to be laptops, but I can’t tell for sure.

      • poVoq

        IDK 🤷‍♂️ But it also looks like the laptop GPU you propose has a maximum of 12 GB of VRAM, which is already quite low for the older image models and definitely not enough for most language models.
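
        For a rough sense of why 12 GB is tight on the language-model side, a quick back-of-envelope (the 7B/13B sizes and bytes-per-weight are common reference points, not something from the benchmark data):

```python
# Rough VRAM needed just to hold model weights (ignores activations and KV cache).
def weights_gib(params_billion: float, bytes_per_weight: float) -> float:
    return params_billion * 1e9 * bytes_per_weight / 1024**3

for name, params in [("7B", 7), ("13B", 13)]:
    fp16 = weights_gib(params, 2.0)   # 16-bit weights
    q4 = weights_gib(params, 0.5)     # ~4-bit quantized
    print(f"{name}: ~{fp16:.0f} GiB fp16, ~{q4:.1f} GiB 4-bit")
```

        So at fp16 even a 7B model doesn’t fit in 12 GB, and it only gets comfortable once you quantize.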

        • @j4k3 (OP)

          This guy is Windows-centric, but he has a list of minimum RAM requirements: https://www.youtube.com/watch?v=H-6DXU967bU&t=314

          [cropped screenshot of the requirements list from the video]

          I’m mostly concerned with what potential proprietary garbage is locked into the firmware of a laptop. Current PC motherboards are even worse for this; even System76’s firmware for their desktops is proprietary. The work on HIP to bridge CUDA and ROCm is active and open source. I’ll deal with some limitations to avoid Nvidia treating me like garbage as a customer. There is a good bit of banter about how AMD is currently operating at around half its potential and that this is about to change. With openSIL and the effort AMD is putting into open source, it seems like the better option. I also want to do some more kernel hacking experiments with the CPU scheduler and process isolation, which is far easier when I don’t have to deal with asymmetric cores and the complicated management they need.
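
          To make the process-isolation experiments concrete, this is the sort of thing I mean, a minimal sketch using only the standard library (the reserved core numbers are made up for illustration):

```python
import os

# Hypothetical split: reserve cores 0-3 for an isolated workload and leave the
# rest of the system on the remaining cores.
ISOLATED_CORES = {0, 1, 2, 3}

pid = os.getpid()
print("allowed before:", sorted(os.sched_getaffinity(pid)))

# Pin this process (and any children it spawns later) to the reserved cores.
os.sched_setaffinity(pid, ISOLATED_CORES)
print("allowed after: ", sorted(os.sched_getaffinity(pid)))
```

          With symmetric Zen cores every entry in that set behaves the same; with mixed performance/efficiency cores you also have to care about exactly which cores you picked.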

          Regardless, the linked telemetry shows plenty of people are running AMD GPUs just fine. I was expecting to see custom kernels used with the Radeon stuff, or at least mostly people running mainline, but that is not the case. There are a few on the bleeding edge, but most are on old generic LTS kernels. It looks like it just works. The dataset includes the parameters and iteration time for each user running SD. It is a little slower than Nvidia, but it still works fine. There are a lot of people running 8 GB and smaller GPUs on SD.
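
          If anyone wants to sanity-check their own box the same way, this is roughly how I’d confirm a ROCm build of PyTorch actually sees the card before blaming the kernel (assumes the ROCm wheel of torch is installed):

```python
import torch

# On ROCm builds of PyTorch the CUDA API is backed by HIP, so torch.cuda.*
# still works and torch.version.hip is set (it is None on CUDA-only builds).
print("hip runtime:", torch.version.hip)
print("gpu visible:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```
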

          • poVoq

            This is for very low resolutions only, and AI upscaling then takes another long while. Yes, SD can work with 8 GB of VRAM and 12 is nicer, but the upcoming SDXL will probably need 16 GB to work well enough.
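
            If you want to see how much headroom a given card really has, one quick check is to compare free VRAM against a model’s rough fp16 weight footprint; the ~2 GB and ~7 GB figures below for SD 1.5 and SDXL are ballpark assumptions, not something measured here:

```python
import torch

# Very rough fp16 weight footprints (ballpark assumptions, not measurements).
MODEL_WEIGHTS_GIB = {"sd-1.5": 2.0, "sdxl-base": 7.0}

assert torch.cuda.is_available(), "no GPU visible to torch"
free, total = torch.cuda.mem_get_info()  # bytes on the current device
free_gib = free / 1024**3

for name, need in MODEL_WEIGHTS_GIB.items():
    # Leave a couple of GiB of slack for activations, VAE decode, and upscaling.
    verdict = "fits" if free_gib > need + 2 else "tight, OOM likely"
    print(f"{name}: ~{need} GiB weights, {free_gib:.1f} GiB free -> {verdict}")
```
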

            I agree that Nvidia is crap and would love to recommend AMD, but their software for AI stuff is just bad right now, and their business decision to only support the newest data-center GPUs with it is even worse.

            I have an all-AMD Linux system, and it works great for gaming and VR, but I have given up on trying to get SD to work on it despite having already spent a lot of time on it. Maybe with a newer card it would be better, but I think the risk is just too high to spend a lot of money on an officially unsupported card that AMD can break at any minute, and has done so in the past.
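
            For what it’s worth, the workaround people usually point at for cards ROCm doesn’t officially list is the HSA_OVERRIDE_GFX_VERSION environment variable. Whether it helps depends entirely on the card, and the 10.3.0 value below is just the commonly cited RDNA2 setting, not something verified here:

```python
import os

# Must be set before the HIP runtime initializes, i.e. before importing torch.
# 10.3.0 is the value commonly suggested for RDNA2 cards that ROCm does not
# officially support; other generations need a different value or none at all.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

import torch  # noqa: E402

print("gpu visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```
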

            • @j4k3 (OP)

              This is the kind of sense-talking that got through to me. Thanks. It is why I made the post before pulling the trigger.

              I really hate shopping, and now I’m back to square one. I probably need to focus on an external graphics card solution, but that looks like a messy space to navigate too. There seems to be a good bit of negative feedback about the ASUS ROG external GPU laptop setup. I have no idea what is or is not possible. I think I saw a headline in passing about USB4 support just getting merged into the kernel, so that doesn’t bode well for support on existing hardware. I’m not sure what kind of bandwidth is really needed between the CPU and the GPU for SD.
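
              As a very rough back-of-envelope (every number here is a ballpark assumption, not a measurement): the checkpoint crosses the link once at load time, and the per-image traffic afterwards is tiny because the denoising loop stays on the GPU:

```python
# Assumed numbers: ~2 GB fp16 SD 1.5 checkpoint, ~3 GB/s usable PCIe throughput
# over a Thunderbolt/USB4 eGPU link, and a 4x64x64 fp16 latent per 512x512 image.
weights_gb = 2.0
link_gb_per_s = 3.0
latent_kib = 4 * 64 * 64 * 2 / 1024

print(f"one-time model upload: ~{weights_gb / link_gb_per_s:.1f} s")
print(f"per-image latent transfer: ~{latent_kib:.0f} KiB (negligible)")
```

              If those assumptions hold, the link speed would mostly matter at model-load time rather than during generation.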

              Thanks again for the minor disappointment to avoid a major one later.