• @Teanut
    195 hours ago

    In fairness, unless you have about 800GB of VRAM/HBM you’re not running the real DeepSeek R1 yet. The smaller models are Llama or Qwen models distilled from DeepSeek R1.
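
    Rough back-of-envelope for where the ~800GB figure comes from (a sketch, not exact numbers: the overhead factor for KV cache and activations is my own guess):

    ```python
    # Weights-only estimate plus a fudge factor for KV cache / activations.
    def memory_gb(params_billion: float, bytes_per_param: float,
                  overhead: float = 1.2) -> float:
        return params_billion * bytes_per_param * overhead

    # DeepSeek R1: ~671B total parameters, released in FP8 (1 byte/param).
    print(f"R1 full:     ~{memory_gb(671, 1.0):.0f} GB")  # ~805 GB

    # A Qwen-based 14B distill in FP16 (2 bytes/param).
    print(f"14B distill: ~{memory_gb(14, 2.0):.0f} GB")   # ~34 GB

    # The same distill at ~4-bit quantization (~0.5 byte/param) is what
    # actually squeezes onto a 16GB card.
    print(f"14B @ 4-bit: ~{memory_gb(14, 0.5):.0f} GB")   # ~8 GB
    ```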

    I’m really hoping Deepseek releases smaller models that I can fit on a 16GB GPU and try at home.

    • @brucethemoose
      5 hours ago

      Qwen 2.5 is already amazing for a 14B, so I don’t see how DeepSeek can improve on it that much with a new base model, even if they continue-train it.

      Perhaps we need to meet in the middle: quad-channel APUs like Strix Halo become more common, and 40-80GB MoE models get released to match. Perhaps bitnet ones? (Rough sizing sketch at the end of this comment.)

      Or design them for asynchronous inference.

      I just don’t see how 20B-ish models can perform like ones an order of magnitude bigger without a paradigm shift.
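
      To put rough numbers on the 40-80GB idea (a sketch with hypothetical model sizes, weights only, ignoring KV cache):

      ```python
      # Weights-only footprint for a given parameter count and bit-width.
      def model_gb(total_params_b: float, bits_per_param: float) -> float:
          return total_params_b * bits_per_param / 8

      # A hypothetical ~120B-total-parameter MoE at 4-bit sits in the middle
      # of that range, even though only a fraction of experts fires per token.
      print(f"120B MoE @ 4-bit: ~{model_gb(120, 4):.0f} GB")     # ~60 GB

      # Bitnet-style ternary weights (~1.58 bits/param) stretch much further.
      print(f"200B @ 1.58-bit:  ~{model_gb(200, 1.58):.0f} GB")  # ~40 GB
      print(f"400B @ 1.58-bit:  ~{model_gb(400, 1.58):.0f} GB")  # ~79 GB
      ```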

      • @Teanut
        45 hours ago

        Nvidia’s new Digits workstation, while expensive from a consumer standpoint, should be a great tool for local inference research. $3000 for 128GB isn’t a crazy amount for a university or other researcher to spend, especially when you look at the price of the 5090.
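
        Quick capacity math for a 128GB box (the quantization levels and the headroom reserved for context are my assumptions, not Digits specs):

        ```python
        # How many parameters fit in the memory budget at a given bit-width,
        # after reserving headroom for KV cache, activations, and the OS.
        BUDGET_GB = 128
        HEADROOM_GB = 20  # assumed, not a Digits spec

        def max_params_billion(bits_per_param: int) -> float:
            return (BUDGET_GB - HEADROOM_GB) * 8 / bits_per_param

        for bits in (16, 8, 4):
            print(f"{bits:>2}-bit: up to ~{max_params_billion(bits):.0f}B params")
        # 16-bit: ~54B, 8-bit: ~108B, 4-bit: ~216B
        ```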