AMD GPUs are just as good as Nvidia GPUs. Intel's suck, but they're in the market, too. The gaming market is irrelevant to Nvidia's and AMD's market caps and profits.
Nvidia's main advantage is its proprietary CUDA software stack, which means the majority of AI software runs only on Nvidia GPUs and is incompatible with AMD or Intel GPUs.
Yes, AMD completely overslept here and their ROCm is far inferior. But at least regulators could force NVIDIA to open their CUDA libraries, or at least allow translation layers like ZLUDA.
Though I suspect they'd play the same card Microsoft did: obfuscating things and making them confusing enough to hinder portability.
> But at least regulators could force NVIDIA to open their CUDA libraries, or at least allow translation layers like ZLUDA.
I don’t believe there’s anything stopping AMD from re-implementing the CUDA APIs; in fact, I’m pretty sure that’s exactly what HIP is for, even though it’s not 100% automatic. AMD probably can’t link against the CUDA libraries like cuDNN and cuBLAS, but I doubt that would be useful anyway, since those libraries are full of GPU-specific optimizations. AMD makes its own replacements for them (MIOpen, rocBLAS) regardless.
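For what it's worth, porting with HIP really is mostly a mechanical rename (tools like hipify-perl and hipify-clang automate much of it). A minimal sketch of a HIP vector-add, with the CUDA calls it replaces noted in comments — untested on real hardware, so treat it as an illustration of the API mapping rather than production code:

```cpp
// HIP mirrors the CUDA runtime API almost 1:1; the CUDA equivalents
// of these calls are cudaMalloc, cudaMemcpy, cudaDeviceSynchronize,
// and cudaFree. The <<<grid, block>>> launch syntax is the same.
#include <hip/hip_runtime.h>

__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // identical to CUDA
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float *d_a, *d_b, *d_c;
    hipMalloc(&d_a, n * sizeof(float));  // was: cudaMalloc(...)
    hipMalloc(&d_b, n * sizeof(float));
    hipMalloc(&d_c, n * sizeof(float));
    vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    hipDeviceSynchronize();              // was: cudaDeviceSynchronize()
    hipFree(d_a); hipFree(d_b); hipFree(d_c);
    return 0;
}
```

The "not 100% automatic" part shows up once you go past the runtime API: inline PTX, warp-size assumptions (32 on NVIDIA vs 64 on many AMD parts), and calls into cuDNN/cuBLAS all need hand-porting.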
IMO, the biggest annoyance with ROCm is that consumer GPU support is very poor; officially it covers only a short list of cards. On CUDA you can use any reasonably modern NVIDIA GPU and it will “just work.” This means if you’re a student, you have a reasonable chance of experimenting with compute libraries or even GPU programming if you have an NVIDIA card, but much less so if you have an AMD card.
> AMD GPUs are just as good as Nvidia GPUs. Intel's suck, but they're in the market, too. The gaming market is irrelevant to Nvidia's and AMD's market caps and profits.
> Nvidia's main advantage is its proprietary CUDA software stack, which means the majority of AI software runs only on Nvidia GPUs and is incompatible with AMD or Intel GPUs.
Exactly: too many people confuse the monopoly aspect with the consumer gaming stuff, which isn’t even pocket change at this point.
CUDA and AI are the whales in the room, and Nvidia has a stranglehold on that market and should be investigated.
(Even though, IMO, this is because AMD did their usual shitty job on the software side and basically gave the market away.)