And DIY ML use cases (local LLM, video/audio upscaling, image generation).
Hope it will be possible to use two cards together for 48GB of VRAM 🤞.
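Pooling them into one 48GB address space won't happen automatically, but splitting work across both devices is the usual approach, and at minimum both cards should show up as separate devices. A rough sketch of checking that, assuming a PyTorch build with the XPU backend (2.4+); the exact property attribute names are an assumption and may differ per version:

    # Rough sketch: list Intel GPUs visible to PyTorch's XPU backend and sum their VRAM.
    # Assumes PyTorch 2.4+ with XPU support; property names may vary by version.
    import torch

    if torch.xpu.is_available():
        total = 0
        for i in range(torch.xpu.device_count()):
            props = torch.xpu.get_device_properties(i)
            print(f"xpu:{i} {props.name} {props.total_memory / 2**30:.1f} GiB")
            total += props.total_memory
        print(f"combined: {total / 2**30:.1f} GiB")
    else:
        print("no XPU devices visible")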
But does it support PyTorch?
You just have to install five different Python versions and then somehow debug the tooling so each tool links against the right one.
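For what it's worth, the story has improved: recent PyTorch (2.4+) ships a native XPU backend, so in principle one venv with one Python is enough, whereas older setups needed intel-extension-for-pytorch on top. A minimal smoke test, assuming such a build is installed:

    # Minimal smoke test for PyTorch's Intel GPU (XPU) backend.
    # Assumes PyTorch 2.4+ built with XPU support.
    import torch

    device = "xpu" if torch.xpu.is_available() else "cpu"
    x = torch.randn(1024, 1024, device=device)
    y = x @ x.T  # simple matmul to exercise the device
    print(device, y.shape, y.device)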
If I understand correctly, these cards are less energy efficient than at least Nvidia's. It makes me sad that nobody takes this into account, both environmentally and price-wise.
Less efficient than NVIDIA, but still more efficient than last gen.
They just entered the market; it will take some time to mature. The B580 is definitely a great contender when you can get it for MSRP. In my country in Western Europe I'd pay €329, which is $344, so the benefit of a sharply priced alternative to a 4060 with 50% more VRAM is moot, as it's actually the same price or more for me.
It is a second gen product though, and it’s leaps ahead of first gen. Imagine where the next architecture could be. Meanwhile Nvidia relies heavily on making the chips gigantic to improve performance between generations.
Also, these cards will improve greatly as the drivers mature, while Nvidia's are already mature and carry 25 years of baggage.
I certainly hope that they improve and become serious competition. However, we've seen AMD stay stuck behind Nvidia, and Intel is anything but guaranteed to succeed, especially given the downfall of their power-hungry CPUs.
It's their second generation. You cannot possibly expect them to be competitive with the big players who have been established for many years already. For what it's worth, I think Intel is doing pretty decently here already.
Probably more efficient on an absolute basis. I believe the 5090 will be 600 watts and the 5080 will be 400 watts.
You should compare it to a model that’s in the same range, not those monsters.