I have experience running servers, but I would like to know whether this is feasible. I just need a private LLM roughly on the level of GPT-3.5.

  • @[email protected]
    4
    3 months ago

    Intel Arc also works surprisingly well and consistently for ML if you use llama.cpp for LLMs or Automatic1111 for Stable Diffusion. In terms of usability it's definitely much closer to Nvidia than to AMD.
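
    For a GPT-3.5-style private setup, the llama.cpp route mentioned above boils down to serving a GGUF model over its built-in HTTP server. This is only a sketch: the model path is a placeholder, and you'd need to build llama.cpp with a GPU backend (e.g. SYCL or Vulkan for Arc) for the offload flag to do anything.

    ```shell
    # Sketch: serving a local GGUF model with llama.cpp's HTTP server.
    # ./models/model.gguf is a placeholder for whatever model you downloaded.
    ./llama-server \
      -m ./models/model.gguf \
      -ngl 99 \
      -c 4096 \
      --port 8080
    ```

    `-ngl` sets how many layers are offloaded to the GPU (99 effectively means "all of them"), `-c` sets the context window, and the server exposes an OpenAI-compatible API on the chosen port, so existing GPT client code can point at it.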

    • @TheBigBrotherOP
      -6
      edit-2
      3 months ago

      Would you suggest the K9 instead of the K8?