Article: https://proton.me/blog/deepseek

Calls it “Deepsneak”, failing to make it clear that the reason people love Deepseek is that you can download it and run it securely on any of your own private devices or servers, unlike most of the competing SOTA AIs.

I can’t speak for Proton, but the last couple of weeks have shown some very clear biases coming out.

  • @[email protected]

    To be fair, most people can’t actually self-host Deepseek, but there are already other providers offering API access to it.
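
    For example, here is a minimal sketch of calling such a hosted endpoint, assuming the provider exposes an OpenAI-compatible API; the base URL, key, and model tag below are placeholders, not any particular provider’s values:

    ```python
    # Minimal sketch: querying a hosted DeepSeek R1 model through an
    # OpenAI-compatible API. base_url, api_key and the model tag are
    # placeholders -- substitute whatever your chosen provider documents.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.example-provider.com/v1",  # hypothetical provider URL
        api_key="YOUR_API_KEY",
    )

    response = client.chat.completions.create(
        model="deepseek-r1",  # model identifier varies by provider
        messages=[{"role": "user", "content": "Summarize mixture-of-experts in one paragraph."}],
    )
    print(response.choices[0].message.content)
    ```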

    • @halcyoncmdr

      There are plenty of step-by-step guides to run Deepseek locally. Hell, someone even had it running on a Raspberry Pi. It seems to be much more efficient than other current alternatives.

      That’s about as openly available to self-host as you can get without a 1-button installer.
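
      As a concrete illustration, a locally pulled model can be prompted over Ollama’s HTTP API. The sketch below assumes Ollama is running on its default port and that a model has already been pulled (e.g. `ollama pull deepseek-r1:8b`); the tag is just an example and, as noted further down the thread, it refers to a distilled model rather than the full R1:

      ```python
      # Minimal sketch: prompting a locally hosted model through Ollama's REST API.
      # Assumes Ollama is running locally on its default port (11434) and the
      # model tag below has already been pulled; adjust both to your setup.
      import json
      import urllib.request

      payload = json.dumps({
          "model": "deepseek-r1:8b",   # example tag, a distilled variant
          "prompt": "Why is the sky blue?",
          "stream": False,
      }).encode("utf-8")

      req = urllib.request.Request(
          "http://localhost:11434/api/generate",
          data=payload,
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["response"])
      ```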

      • @tekato

        You can run an imitation of the DeepSeek R1 model, but not the actual one unless you literally buy a dozen of whatever NVIDIA’s top GPU is at the moment.

        • @[email protected]

          A server-grade CPU with a lot of RAM and memory bandwidth would work reasonably well, and cost “only” ~$10k rather than $100k+…
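
          Back-of-the-envelope, the gap is mostly memory: the full R1 has roughly 671B parameters (per DeepSeek’s own release), so even heavily quantized weights need hundreds of GB, which fits in a server’s RAM far more cheaply than in a stack of top-end GPUs. A rough sketch of that arithmetic (weights only; KV cache and runtime overhead push the real numbers higher):

          ```python
          # Rough memory estimate for hosting the full DeepSeek R1 weights
          # (~671B parameters). Quantization overhead, KV cache and activations
          # are ignored, so real requirements land somewhat higher.
          PARAMS = 671e9

          for label, bytes_per_param in [("FP16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
              gb = PARAMS * bytes_per_param / 1024**3
              print(f"{label}: ~{gb:,.0f} GB just for weights")

          # FP16:  ~1,250 GB -> multiple nodes of top-end accelerators
          # 8-bit:   ~625 GB -> still beyond any consumer GPU setup
          # 4-bit:   ~313 GB -> feasible in a server-class box with lots of RAM
          ```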

      • @[email protected]

        Those are not DeepSeek R1. They are unrelated models, like Llama 3 from Meta or Qwen from Alibaba, “distilled” by DeepSeek.

        This is a common technique for transferring capability from a larger model to a smaller one.

        Ollama should have never labelled them deepseek:8B/32B. Way too many people misunderstood that.
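
        For anyone curious, “distillation” here usually means training the smaller student model to match the larger teacher’s output distribution rather than only the raw labels. A minimal, generic PyTorch sketch of that loss (an illustration of the general technique, not DeepSeek’s actual training recipe):

        ```python
        # Minimal sketch of knowledge distillation: the student is trained to
        # match the teacher's softened output distribution (KL divergence) in
        # addition to ordinary cross-entropy on the ground-truth labels.
        import torch
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
            # Soft targets: compare distributions at temperature T.
            soft_teacher = F.softmax(teacher_logits / T, dim=-1)
            soft_student = F.log_softmax(student_logits / T, dim=-1)
            kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)

            # Hard targets: standard cross-entropy against the true labels.
            ce = F.cross_entropy(student_logits, labels)

            return alpha * kd + (1 - alpha) * ce

        # Toy example: batch of 4, vocabulary of 10 "classes".
        student_logits = torch.randn(4, 10, requires_grad=True)
        teacher_logits = torch.randn(4, 10)
        labels = torch.randint(0, 10, (4,))
        print(distillation_loss(student_logits, teacher_logits, labels))
        ```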