Hey all! This is my first post, so I’m sorry if anything is formatted incorrectly or if this is the wrong place to ask. Recently I’ve saved up enough to upgrade my graphics card ($350 budget). I’ve heard great things about AMD on Linux and appreciate open source drivers, so as not to be at the mercy of Nvidia. My first choice was a 6700 XT, but then I heard that Nvidia has significantly higher performance in workstation tasks (not to mention the benefits of CUDA and NVENC), so I’ve been looking into a 3060 or 3060 Ti. I do a bit of gaming in my free time, but it’s not my top priority, and I can almost guarantee that any option in this price range will be more than enough for the games I play. Ultimately my questions come down to:

  1. Would Nvidia or AMD provide more raw performance on Linux in my price range?
  2. Which would be better for productivity (CUDA, encoding, etc.)? (I mainly use Blender, FreeCAD, and SolidWorks, but I appreciate having extra features for any software I may use in the future.)
  3. Which option would hold up best after a few years? (I’ve seen AMD improve performance with driver updates before, but the NVK driver also looks promising. I also host some servers and tend to cycle components from my main system into my Proxmox cluster.)

A few more details to fill in any missing info: my current system is a Ryzen 7 3700X, GTX 1050 Ti, 32 GB of RAM, an 850 W PSU, and an NVMe SSD. I’ve only ever used Nvidia cards, but AMD looks like a great alternative. As another side note, is there any way to run CUDA apps on AMD? I plan on running my new GPU alongside my old one, so NVENC isn’t too much of a concern.

Thanks in advance for any thoughts or ideas!

Edit 1: Thanks so much for all of the feedback! I’m not going to purchase a GPU quite yet, but probably will in a few weeks. First I’ll be testing Wayland with my 1050 Ti and researching how much I need each feature of each GPU. Thanks again for all of your feedback; I’ll update the post when I do order said GPU.

Edit 2: I made an interesting decision and actually got the Arc A770. I’d be happy to discuss exactly why, and some of the pros and cons so far, but I do plan on eventually compiling a more in-depth review somewhere at some point.

  • @cybersandwich · 1 year ago
    Nvidia is not nearly as bad on Linux as people say, and Radeon isn’t nearly as great as people say.

    For your use case I would 100% go with an Nvidia GPU. It will work so much better with Blender. It will also work better with other workstation tasks like video encoding and AI/ML. AMD’s open source driver doesn’t support the AMF encoder, so you’d have to use their proprietary driver (and lose the benefits of the open source one that everyone raves about). ROCm is improving, but it’s so far behind CUDA that it will end up holding you back for AI/ML compute tasks.

    • @[email protected] (OP) · 1 year ago
      I’m new to ROCm and HIP; do you think they’ll improve over time? Does AMD have an existing implementation for any CUDA software, or must developers port their stuff to ROCm? I ask because most of my CUDA software already runs OK-ish on my 1050 Ti, so if I went AMD it might provide reasonable performance, with possible ROCm development in the future. You also mentioned AI/ML, and I’d actually really like to give TensorFlow a try at some point. It seems each GPU has features that are still in development (NVK vs. ROCm), and whichever I go with, it sounds like I’ll be crossing my fingers for it to mature in a reasonable timeframe. At the moment I’m leaning Nvidia, because if NVK gains traction in a few years, it could provide a good open source alternative to switch to.

      • @cybersandwich · 1 year ago

        They will definitely improve over time–if only because it couldn’t possibly get worse. :)

        Joking aside, they’ve made significant improvements even over the last few months. Tensorflow has a variant that supports rocm now. PyTorch does as well. Both of those happened in the last 6 months or so.
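        For what it’s worth, the ROCm builds of PyTorch keep the familiar CUDA-style API, so existing code mostly doesn’t need to change. A quick sanity check looks roughly like this (assuming you’ve installed one of the ROCm wheels of torch rather than the default CUDA package):

        ```python
        import torch

        # On ROCm builds, AMD GPUs are exposed through the usual torch.cuda API,
        # so CUDA-oriented code generally runs unchanged.
        print(torch.cuda.is_available())   # True if a supported AMD GPU is visible
        print(torch.version.hip)           # HIP/ROCm version string on ROCm builds, None on CUDA builds
        if torch.cuda.is_available():
            print(torch.cuda.get_device_name(0))
        ```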

        AMD says it’s prioritizing ROCm (https://www.eetimes.com/rocm-is-amds-no-1-priority-exec-says/). But if you read the Hacker News thread on that same article, you’ll see quite a few complaints and some skepticism.

        The thing about CUDA is that it has over a decade of a head start, and Nvidia, for all its warts, has been actively supporting it for that entire time. So many things that just work with Nvidia and CUDA are things you’ll have to cobble together and cross your fingers over with ROCm. There is an entire ecosystem built around CUDA, so tools, forums, guides, etc. are all a quick web search away. That doesn’t exist (yet) for ROCm.

        To put it in perspective: I have a 6900 XT (that I regretfully sold my 3070 Ti to buy). I spent a week just fighting with ROCm to get it to work. It involved editing some system files to trick the installer into thinking my Pop!_OS install was Ubuntu, and carefully installing JUST the ROCm driver, since I still wanted to use the open source AMD drivers for everything else. I finally got it working, but NO libraries at the time supported it, so all of the online guides, tutorials, etc. couldn’t be used. The documentation is horrendous, imo.
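        (For context, the “trick” was basically making the system identify itself as Ubuntu in /etc/os-release so the ROCm packages would agree to install. Something along these lines; the values are illustrative rather than an exact copy of my file:)

        ```
        # /etc/os-release (relevant fields only; illustrative values)
        ID=ubuntu          # was "pop" on Pop!_OS
        ID_LIKE=debian
        VERSION_ID="22.04"
        ```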

        I actually got so annoyed that I bought a used 1080 Ti to do the AI/ML work I needed to do. It took me 30 minutes to install it in a headless Ubuntu server and get my code up and running. It’s been working without issue for 6 months.