• Alphane Moon (OP)
    26 hours ago

    From my limited consumer-level perspective, Intel/AMD platforms aren’t mature enough. Try looking into any open-source or commercial ML software aimed at consumers: Nvidia support is guaranteed and first class.

    The situation is arguably different in gaming.

    • @brucethemoose
      5 hours ago

      Intel is not as bad in LLM land as you’d think; llama.cpp’s Intel support gets better every day.

      Nvidia may be first class, but first-class support doesn’t matter if the model you want doesn’t fit in VRAM. I’d trade my 3090 for a 48GB Arc card without even blinking, even if the setup is an absolute pain.
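
      For anyone who wants to poke at this: llama.cpp picks its GPU backend at build time (CUDA for Nvidia, SYCL or Vulkan for Intel Arc, ROCm for AMD), and the Python bindings look the same either way. A minimal sketch with llama-cpp-python; the model path and settings are placeholders, not a recommendation:

      ```python
      from llama_cpp import Llama  # pip install llama-cpp-python

      # Offloads layers to whatever GPU backend llama.cpp was compiled with
      # (CUDA, SYCL, Vulkan, ROCm); the script itself is vendor-agnostic.
      llm = Llama(
          model_path="models/some-model-Q4_K_M.gguf",  # placeholder path
          n_gpu_layers=-1,  # -1 = offload every layer to the GPU
          n_ctx=8192,       # larger context -> larger KV cache -> more VRAM
      )

      out = llm("Q: Why does VRAM matter for local LLMs? A:", max_tokens=64)
      print(out["choices"][0]["text"])
      ```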

    • @anamethatisnt
      25 hours ago

      Thanks for the insight. Kinda sad how self-hosted LLM or ML work means Nvidia is a must-have for the best experience.

      • @brucethemoose
        25 hours ago

        Only because AMD/Intel aren’t pricing competitively. I define “best experience” as the largest LLM/context I can fit on my GPU, and right now that’s essentially dictated by VRAM.
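
        Napkin math, as a rough sketch (quantized weights plus fp16 KV cache, ignoring activations and runtime overhead; the 70B shape below is approximate):

        ```python
        def vram_needed_gb(params_b, bits, ctx, layers, kv_heads, head_dim):
            """Rough fit check: quantized weights + fp16 KV cache, nothing else."""
            weights = params_b * bits / 8  # billions of params * bytes/param = GB
            kv = 2 * layers * kv_heads * head_dim * ctx * 2 / 1e9  # K and V, 2 bytes each
            return weights + kv

        # Roughly a Llama-3-70B shape: 80 layers, 8 KV heads, head_dim 128,
        # 4-bit quant, 8k context:
        need = vram_needed_gb(70, 4, 8192, 80, 8, 128)
        print(f"~{need:.0f} GB needed")  # ~38 GB: over a 24 GB 3090,
        print(need <= 24, need <= 48)    # comfortable on a 48 GB card
        ```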

        That being said, I get why most people wouldn’t want to go through the fuss of setting up Intel/AMD inference.