• @solrize
    12 · 7 months ago

    How about improving ROCm itself? Is it still a big problem like before?

    • Domi
      7 · 7 months ago

      I use ROCm for inference, both text generation via llama.cpp/LMStudio and image generation via ComfyUI.

      Works pretty much perfectly on a 6900 XT. Very fast and easy to set up.
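
      For anyone curious, a ROCm build of llama.cpp looks roughly like this. The exact flag names are an assumption and have changed across versions (`LLAMA_HIPBLAS` was the older spelling, `GGML_HIP` the newer one); `gfx1030` is the RDNA2 target that covers the 6900 XT:

      ```shell
      # Sketch of a llama.cpp HIP/ROCm build; requires the ROCm toolchain installed.
      git clone https://github.com/ggerganov/llama.cpp
      cd llama.cpp

      # GGML_HIP enables the ROCm backend (older releases used -DLLAMA_HIPBLAS=ON).
      # AMDGPU_TARGETS pins the GPU architecture; gfx1030 = Navi 21 (6900 XT).
      cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1030
      cmake --build build --config Release
      ```

      With that in place, offloading layers to the GPU at runtime is just a flag like `-ngl` on the resulting binaries.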

      I had issues with some libraries only supporting CUDA when I tried to train, but that was almost 6 months ago, so things have probably improved in that area as well.