I currently have an RX 6700 XT and I’m quite happy with it for gaming and regular desktop usage, but I was recently doing some local ML work and was made aware of the huge gap NVIDIA has over AMD in that space.

But yeah, going back to NVIDIA (I used to run a 1080) after going AMD… feels kinda dirty to me ;-; I was very happy to move to AMD and finally be free from the walled garden.

My first thought was to just buy a second GPU, keep my 6700XT for gaming and use the NVIDIA card for ML, but unfortunately my motherboard doesn’t have two PCIe slots I could use for GPUs, so I need to choose. I’d be able to buy a used RTX 3090 for a fair price; I don’t want to go for the current gen because of the current pricing.

So my question is: how is NVIDIA nowadays? I specifically mean Wayland compatibility, since I only recently switched and it would suck to go back to Xorg. Other than that, are there any hurdles, issues, or annoyances, or is it smooth and seamless nowadays? Would you upgrade in my case?

EDIT: Forgot to mention, I’m currently using GNOME on Arch (btw), since that might be relevant.

  • @[email protected]M
    link
    fedilink
    12
    edit-2
    8 months ago

    What sort of ML tasks exactly, and is it personal or professional?

    If it’s for LLMs you can just use Petals, a distributed service that doesn’t need your own GPU.

    If it’s for SD / image generation, there are four ways you can go about it. The first is to rent a GPU from a cloud service like vast.ai, runpod.io, vagon.io etc, then run SD on the machine you’re renting. It’s relatively cheap; you can generate as much as you want for the duration you’ve rented. Last I checked, prices were something like ~0.33 USD per hour, which is far cheaper than buying a top-end nVidia card for casual workloads.

    The second option is using a website/service where the SD frontend is presented to you and you generate images through a credit system: buy X credits and you can generate X images, etc. E.g. sites like Rundiffusion, dreamlike.art, seek.art, lexica etc.

    The third option is a monthly/yearly subscription where you can generate as much as you want, such as MidJourney, DALL-E etc. This can be cheaper than a pay-as-you-go service if you’ve got a ton of stuff to generate. There’s also Adobe Firefly, which is like a hybrid option (x credits / month).

    Finally, there are plenty of free Google Colabs for SD. There’s also Stable Horde, which uses distributed computing for SD, and an easy web UI for it called ArtBot.

    So yeah, there are plenty of options these days depending on what you want to do; you no longer need to actually own an nVidia card - and in fact for most users renting is the cheaper option. Say you wanted to buy a 4090, which costs ~$2000. If you instead spent that on cloud services at say $20 p/m, you’d get ~8.3 years of usage - and most GPUs would become outdated in that time and you’d have to buy a new one (whereas cloud GPUs keep improving, and with as-a-service models you’d get better GPUs at the same price). And I’m not even factoring in other expenses like power consumption and time spent on maintenance and troubleshooting. So for most people it’s a waste to buy a card just for ML, unless you’re going to be using it 24x7 and you’re actually making money off it.

    Edit: A used 3090 is going for ~$715-850 at the moment, which works out to the equivalent of ~3+ years of image generation via cloud services, assuming you go for paid subscription options. If you factor in the free options or casual pay-as-you-go systems, buying can still work out a lot more expensive.
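    As a quick sanity check on those numbers, here’s a rough break-even sketch in Python. The card prices and the $20/month subscription rate are just the ballpark figures from this comment, not live quotes:

    ```python
    # Buy-vs-rent break-even sketch. All prices are the rough figures
    # quoted above (assumptions, not live market data).

    def breakeven_months(card_price: float, monthly_sub: float) -> float:
        """How many months of a cloud subscription the card price would cover."""
        return card_price / monthly_sub

    # ~$2000 RTX 4090 vs a ~$20/month subscription
    months = breakeven_months(2000, 20)
    print(f"{months:.0f} months (~{months / 12:.1f} years)")  # 100 months (~8.3 years)

    # ~$715-850 used RTX 3090 at the same $20/month
    low, high = breakeven_months(715, 20), breakeven_months(850, 20)
    print(f"~{low / 12:.1f}-{high / 12:.1f} years")  # ~3.0-3.5 years
    ```

    Power draw, resale value and maintenance time would shift the break-even point further, but the ballpark holds either way.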

    • @[email protected]
      link
      fedilink
      6
      edit-2
      8 months ago

      Petals seems like a great project, barring the fact that your prompts can be seen and altered by other members of the swarm.