• Paradox · 50 points · 8 hours ago

    Can I download their model and run it on my own hardware? No? Then they’re inferior to DeepSeek.

    • @Teanut · 20 points · 7 hours ago

      In fairness, unless you have about 800GB of VRAM/HBM you’re not running the real DeepSeek. The smaller models are Llama or Qwen models distilled from DeepSeek R1.

      I’m really hoping DeepSeek releases smaller models that I can fit on a 16GB GPU and try at home.
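
      Rough back-of-envelope for those numbers (assuming the commonly cited ~671B total parameters for R1; these are weight-only estimates, ignoring KV cache and activations):

```python
# Weight-only memory estimates; parameter counts and bit-widths are approximate.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """GB needed just to hold the weights at a given precision."""
    return params_billion * bits_per_weight / 8  # 1e9 params cancel against 1e9 bytes/GB

print(f"DeepSeek R1 (671B) @ FP8:   {weight_gb(671, 8):.0f} GB")  # ~671 GB -> the ~800GB class of rigs
print(f"DeepSeek R1 (671B) @ 4-bit: {weight_gb(671, 4):.0f} GB")  # ~335 GB, still far beyond one GPU
print(f"14B distill @ 4-bit:        {weight_gb(14, 4):.0f} GB")   # ~7 GB, fits a 16GB card with room for context
```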

      • @brucethemoose · 10 points · edited · 7 hours ago

        Qwen 2.5 is already amazing for a 14B, so I don’t see how DeepSeek can improve on that much with a new base model, even if they continue training it.

        Perhaps we need to meet in the middle: have quad-channel APUs like Strix Halo become more common, and release 40-80GB MoE models to match. Perhaps bitnet ones?

        Or design them for asynchronous inference.

        I just don’t see how 20B-ish models can perform like ones an order of magnitude bigger without a paradigm shift.
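
        For a sense of why that 40-80GB MoE range is appealing, a quick sketch (the 120B-total / 20B-active split below is a made-up example, not an announced model):

```python
# Illustrative MoE sizing: weight memory scales with *total* parameters,
# per-token compute scales with *active* parameters. Numbers are hypothetical.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8

total_b, active_b = 120, 20  # hypothetical MoE: 120B total, ~20B active per token

print(f"4-bit weights:    {weight_gb(total_b, 4):.0f} GB")     # ~60 GB -> in the 40-80GB window
print(f"bitnet ~1.58-bit: {weight_gb(total_b, 1.58):.0f} GB")  # ~24 GB, if bitnet training pans out
print(f"active params:    {active_b}B -> per-token compute closer to a 20B dense model")
```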

        • @Teanut · 5 points · 7 hours ago

          Nvidia’s new Digits workstation, while expensive from a consumer standpoint, should be a great tool for local inference research. $3000 for 128GB isn’t a crazy amount for a university or other researcher to spend, especially when you look at the price of the 5090.
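
          As a rough sanity check on what 128GB of unified memory buys you (weight-only estimates, quantization sizes approximate):

```python
# Which weight footprints fit in a 128GB unified-memory budget (weights only, no KV cache).
budget_gb = 128

candidates = [("70B dense @ FP16", 70, 16), ("70B dense @ 4-bit", 70, 4), ("120B MoE @ 4-bit", 120, 4)]
for name, params_b, bits in candidates:
    need_gb = params_b * bits / 8
    print(f"{name}: ~{need_gb:.0f} GB -> {'fits' if need_gb < budget_gb else 'too big'}")
```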

  • @[email protected] · 11 points · 8 hours ago

    I wonder how much this puff piece cost OpenAI? Pretty cheap compared to the damage of being caught with their hand in the proverbial cookie jar.

  • Tony Bark · 10 points · 8 hours ago

    I’d like to see OpenAI compare themselves to models other than their own.