• @modeler · 40 points · 1 month ago · edited

    Typically you need about 1GB graphics RAM for each billion parameters (i.e. one byte per parameter). This is a 405B parameter model. Ouch.

    Edit: you can try quantizing it. This reduces the amount of memory required per parameter to 4 bits, 2 bits or even 1 bit. As you reduce the size, the performance of the model can suffer. So in the extreme case you might be able to run this in under 64GB of graphics RAM.
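
    To put rough numbers on that (a back-of-the-envelope sketch, weights only, ignoring KV cache and runtime overhead):

    ```python
    # Approximate memory needed just to hold the weights of an
    # N-billion-parameter model at a given quantization level.
    def weight_memory_gb(params_billion: float, bits_per_param: float) -> float:
        return params_billion * 1e9 * bits_per_param / 8 / 1e9  # decimal GB

    for bits in (16, 8, 4, 2, 1):
        print(f"405B @ {bits:>2}-bit ≈ {weight_memory_gb(405, bits):6.1f} GB")
    # -> roughly 810, 405, 202, 101, and 51 GB
    ```

    At 1 bit per parameter the weights alone are around 50GB, which is where the under-64GB figure comes from.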

    • @[email protected]
      link
      fedilink
      English
      201 month ago

      Typically you need about 1GB graphics RAM for each billion parameters (i.e. one byte per parameter). This is a 405B parameter model.

      • @errer · 11 points · 1 month ago

        Yeah uh, let me just put in my 512GB RAM stick…

        • bruhduh · 1 point · 1 month ago

          LGA2011 motherboards are quite cheap (e.g. https://www.ebay.com/p/116332559). Drop in two Xeon 2696 v4s (44 threads each, 88 threads total) and 8x 32GB DDR4 sticks and it comes out quite cheap. You can also install Nvidia P40s with 24GB each; you can max out this build for AI for under $2000.
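
          As a rough fit check on that build (a sketch; two P40s is an assumption, and only the model weights are counted):

          ```python
          ram_gb  = 8 * 32          # 8 x 32GB DDR4 sticks = 256 GB system RAM
          vram_gb = 2 * 24          # assumption: two Tesla P40s at 24 GB each
          weights_gb = 405 * 4 / 8  # ~203 GB for 405B parameters at 4-bit

          print(f"{ram_gb} GB RAM + {vram_gb} GB VRAM = {ram_gb + vram_gb} GB total")
          print("4-bit 405B weights fit in RAM+VRAM:", weights_gb <= ram_gb + vram_gb)
          ```

          So on paper a 4-bit quant of the 405B model fits across system RAM plus the P40s, though with most of it offloaded to DDR4 it would run slowly.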

      • chiisana · 2 points · 1 month ago

        Finally! My dumb dumb 1TB RAM server (4x E5-4640 + 32x32GB DDR3 ECC) can shine.

    • @Siegfried · 8 points · 1 month ago · edited

      At work we have a small cluster totalling around 4TB of RAM

      It has 4 cooling units, a cubic metre of PSUs, and it must take up something like 30 m² of space

    • TipRing · 4 points · 1 month ago

      When the 8-bit quants hit, you could probably lease a 128GB system on RunPod.

    • @[email protected]
      link
      fedilink
      English
      31 month ago

      Can you run this in a distributed manner, like with Kubernetes and lots of smaller machines?

    • @[email protected]
      link
      fedilink
      English
      21 month ago

      According to Hugging Face, you can run a 34B model using 22.4GB of RAM max. That’s an RTX 3090 Ti.
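
      The same byte-per-parameter arithmetic (a rough sketch, not Hugging Face’s own calculator) shows why that lands on a 24GB card:

      ```python
      # Weight memory for a 34B-parameter model at common quantization levels.
      for bits in (16, 8, 4):
          print(f"34B @ {bits}-bit ≈ {34e9 * bits / 8 / 1e9:.0f} GB")
      # -> 68, 34, and 17 GB
      ```

      A 4- to 5-bit quant plus inference overhead is in the ballpark of that 22.4GB figure, which just squeezes into the 3090 Ti’s 24GB.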

    • arefx · 1 point · 1 month ago

      You mean my 4090 isn’t good enough 🤣😂

    • @[email protected]
      link
      fedilink
      English
      1
      edit-2
      1 month ago

      Hmm, I probably have that much distributed across my network… maybe I should look into some way of distributing it across multiple GPUs.

      Frak, just counted and I only have 270GB installed. Approx 40GB more if I install some of the deprecated cards in any spare PCIe slots I can find.