• @RegalPotoo · 53 points · 9 days ago

    Personally I can’t wait for a few good bankruptcies so I can pick up a couple of high end data centre GPUs for cents on the dollar

    • bruhduh · 24 points · edited · 9 days ago

      Search "Nvidia P40 24GB" on eBay: about $200 each, and surprisingly good for self-hosted LLMs. If you plan to build an array of GPUs, search for the P100 16GB instead. It's the same price, but unlike the P40 it supports NVLink, and its 16GB of HBM2 sits on a 4096-bit bus, so it's still competitive for LLM work. The P40's strong point is the amount of memory for the money, but its GDDR5 is rather slow compared to the P100, and it doesn't support NVLink.
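A rough way to see why the P100's HBM2 matters here: single-batch LLM decoding is usually memory-bandwidth-bound, so every weight gets read roughly once per generated token. A back-of-envelope sketch, assuming published spec-sheet bandwidths (about 346 GB/s for the P40's GDDR5, about 732 GB/s for the P100's HBM2) and a hypothetical 13 GB model file:

```python
# Back-of-envelope ceiling for single-batch decode speed on a
# memory-bandwidth-bound GPU: tokens/sec ~ bandwidth / bytes read per token.
# The bandwidth figures are approximate published specs, not measurements.

def tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound assuming every weight is read once per generated token."""
    return bandwidth_gb_s / model_size_gb

MODEL_GB = 13.0  # hypothetical: a ~13B-parameter model quantized to ~1 byte/weight

p40 = tokens_per_sec(346.0, MODEL_GB)   # P40: 24 GB GDDR5, ~346 GB/s
p100 = tokens_per_sec(732.0, MODEL_GB)  # P100: 16 GB HBM2, ~732 GB/s

print(f"P40  ceiling: ~{p40:.0f} tok/s")
print(f"P100 ceiling: ~{p100:.0f} tok/s")
```

Real throughput lands well under these ceilings, but the ratio is the point: roughly double the bandwidth means roughly double the decode speed for the same model.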

      • @RegalPotoo · 5 points · 9 days ago

        Thanks for the tips! I'm looking for something multi-purpose for LLM/Stable Diffusion messing about, plus a transcoder for Jellyfin; I'm guessing there isn't really a sweet spot for all three. I don't really have room or power budget for two cards, so I guess a P40 is probably the best bet?

        • bruhduh · 4 points · edited · 9 days ago

          Try the Ryzen 8700G's integrated GPU for transcoding, since it supports AV1, and one of those P-series GPUs for LLM/Stable Diffusion; that would be a good mix, I think. Or, if you don't have the budget for a new build, buy an Intel Arc A380 for transcoding. You can attach it like a mining GPU through a PCIe riser; Linus Tech Tips tested that card for transcoding, as I remember.

          • @RegalPotoo · 1 point · 9 days ago

            8700g

            Hah, I’ve pretty recently picked up an Epyc 7452, so not really looking for a new platform right now.

            The Arc cards are interesting, will keep those in mind

        • Justin · 2 points · 9 days ago

          The Intel A310 is the best $/perf transcoding card, but if the P40 supports NVENC, it might work for both transcoding and Stable Diffusion.

      • @[email protected] · 4 points · 9 days ago

        Lowest price on eBay for me is 290 euro :/ The P100s are 200 each, though.

        Do you happen to know if I could mix a 3700 with a p100?

        And thanks for the tips!

        • bruhduh · 2 points · 9 days ago

          Ryzen 3700? Or RTX 3070? Please elaborate.

            • bruhduh · 2 points · 9 days ago

              I looked it up; the RTX 3070 has NVLink capability, though I wonder if all of them have it. So you can pair it, if yours supports NVLink.

      • Gormadt · 3 points · 9 days ago

        Personally I don’t care much for the LLM stuff; I’m more curious how they perform in Blender.

        • @utopiah · 3 points · 9 days ago

          Interesting. I did try a bit of remote rendering in Blender (just to learn how to use it via the CLI), so that makes me wonder who is indeed scraping the bottom of the barrel of “old” hardware, and what they are using it for. Maybe somebody is renting old GPUs for render farms, maybe for other tasks. Any pointer to such a trend?

      • @RegalPotoo · 1 point · 9 days ago

        Digging into it a bit more, it seems like I might be better off getting a 12GB 3060: similar price point, but much newer silicon.
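The trade-off is VRAM capacity versus speed: the 3060's newer silicon is faster, but 12 GB fits smaller models than the P40's 24 GB. A quick sketch of which quantized models fit on each card, assuming weights at the given bit-width plus ~20% headroom for KV cache and runtime buffers (a rough rule of thumb, not a fixed constant):

```python
def model_vram_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Rough VRAM need in GB: params (billions) at the given quantization,
    plus ~20% assumed headroom for KV cache and buffers (varies by context)."""
    return params_b * bits / 8 * overhead

def fits(params_b: float, bits: int, vram_gb: float) -> bool:
    """Does a quantized model of this size fit in the given VRAM?"""
    return model_vram_gb(params_b, bits) <= vram_gb

# 13B at 4-bit (~7.8 GB) vs 30B at 4-bit (~18 GB), on 12 GB and 24 GB cards.
print(fits(13, 4, 12), fits(13, 4, 24))  # True True
print(fits(30, 4, 12), fits(30, 4, 24))  # False True
```

By this estimate a 4-bit 13B model fits either card, while a 4-bit 30B model only fits in the P40's 24 GB.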

        • bruhduh · 1 point · edited · 8 days ago

          It depends. If you want to run LLMs, data-centre GPUs are better; if you want general-purpose tasks, newer silicon is better. In my case I prefer a build that offloads tasks, since I'm daily-driving Linux. My dream build is an AMD RX 7600 XT 16GB as the main GPU, an Nvidia P40 for LLMs, and the Ryzen 8700G's 780M iGPU for transcoding and light tasks. That way you have your usual gaming home PC that also serves as a server in the background while being used.