• @Voyajer
    1 month ago

    I’ve run quantized 70B models on CPU with 32 GB of RAM, but it’s very slow.
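
    A minimal sketch of what that kind of pure-CPU run looks like with llama-cpp-python (my illustration, not from the comment; the model path, thread count, and sizes are placeholders. A 4-bit 70B GGUF is roughly 40 GB, so on 32 GB of RAM it only works via memory-mapping and pages constantly, hence the slowness):

    ```python
    # Pure-CPU inference on a quantized GGUF model with llama-cpp-python.
    # Path and parameters are hypothetical placeholders.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-70b.Q4_K_M.gguf",  # hypothetical path
        n_ctx=2048,       # context window
        n_threads=8,      # CPU threads to use
        n_gpu_layers=0,   # 0 = keep every layer on the CPU
    )

    out = llm("Why is CPU-only 70B inference slow?", max_tokens=64)
    print(out["choices"][0]["text"])
    ```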

    • bizarroland
      1 month ago

      I have a home server with 144 GB of RAM, and it was surprisingly cheap. It’s an HP Z6 with a Xeon Gold 6146 processor.

      I found a seller who was selling it with a low-spec Xeon Silver and 16 GB of RAM for about $250.

      Found the processor upgrade for about $120 and spent another $150 on 128 GB of second-hand ECC DDR4.

      I think the total cost was something like $700 after throwing a couple of 8 TB hard drives in.

      I’ve also put an Nvidia RTX 4070 in it, which I got doing some horse trading.

      How close am I on the specs to being able to run the 70B version?

      • @[email protected]
        1 month ago

        What’s the bus speed of the RAM? You might run it just fine but still be bottlenecked there.
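
        For a rough sense of the ceiling that bus speed puts on token generation, here is a back-of-envelope estimate (my numbers and assumptions, not the commenter's): a Xeon Gold 6146 has six DDR4-2666 channels, and each generated token has to stream roughly the whole quantized model out of RAM once.

        ```python
        # Back-of-envelope memory-bandwidth ceiling for CPU token generation.
        # All figures are rough assumptions, not measurements.
        channels = 6                # Xeon Gold 6146: six DDR4 memory channels
        transfers_per_s = 2666e6    # DDR4-2666
        bytes_per_transfer = 8      # 64-bit bus per channel

        peak_bw = channels * transfers_per_s * bytes_per_transfer   # ~128 GB/s
        model_bytes = 40e9          # ~40 GB for a 70B model at 4-bit quantization

        # Each token reads roughly every weight once, so the hard upper bound is:
        ceiling = peak_bw / model_bytes
        print(f"~{peak_bw / 1e9:.0f} GB/s peak, ceiling ~{ceiling:.1f} tokens/s")
        ```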

          • @[email protected]
            1 month ago

            With 144 GB of total RAM, you should be able to run any CPU-intensive software.

            The LLMs use GPU VRAM though, so it doesn’t matter how much system RAM you have: GPU VRAM is what xformers and the tensor libraries prioritize and have ultimately been optimized to use over the CPU and system RAM.
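
            A quick way to sanity-check the VRAM side of that (my sketch; the 4-bit size figure is a rough estimate, not an exact number):

            ```python
            # Compare available GPU VRAM against a rough 4-bit 70B footprint.
            import torch

            if torch.cuda.is_available():
                vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
                model_gb = 70e9 * 0.5 / 1e9   # ~4 bits/parameter => ~35 GB
                print(f"VRAM: {vram_gb:.0f} GB, 4-bit 70B: ~{model_gb:.0f} GB")
                print("fits on GPU" if model_gb < vram_gb else "needs CPU/RAM offload")
            ```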

    • sunzu
      1 month ago

      I’m gonna add some RAM in the hope that I can split the original 70B between GPU and RAM. 8B is great as it is.

      Looks like it should be possible; not sure how big a performance hit offloading to RAM will cause. FAFO.
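
      A sketch of that GPU/RAM split using llama-cpp-python's layer offloading (my illustration; the path and layer count are placeholders, to be tuned until the card's VRAM is nearly full while the remaining layers stay in system RAM):

      ```python
      # Split a quantized 70B between GPU VRAM and system RAM via layer offload.
      # Path and n_gpu_layers are hypothetical; raise n_gpu_layers until VRAM fills.
      from llama_cpp import Llama

      llm = Llama(
          model_path="models/llama-70b.Q4_K_M.gguf",  # hypothetical path
          n_ctx=2048,
          n_gpu_layers=20,   # layers held in VRAM; the rest run on the CPU from RAM
          n_threads=8,
      )

      print(llm("Hello", max_tokens=16)["choices"][0]["text"])
      ```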