• suoko (OP)
    16 hours ago

    You still need expensive hardware to run it. Unless the myceliumwebserver project gets off the ground.

    • @No_Ones_Slick_Like_Gaston
      7 hours ago

      Correct. But what’s more expensive: a single local computing instance, or a cloud-based, credit-eating SaaS AI that doesn’t produce significantly better results?
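
      To put hypothetical numbers on it (every figure below is a made-up placeholder, not a real quote; plug in your own GPU price and API rates):

      ```python
      # Hypothetical break-even sketch: one-time local hardware cost vs.
      # pay-per-token cloud API credits. Every number here is a placeholder.
      gpu_cost_usd = 800.0      # assumed one-time local hardware spend
      api_cost_per_mtok = 2.0   # assumed $ per million tokens, cloud API
      tokens_per_month = 20e6   # assumed monthly usage

      monthly_cloud_cost = tokens_per_month / 1e6 * api_cost_per_mtok
      breakeven_months = gpu_cost_usd / monthly_cloud_cost
      print(f"cloud: ${monthly_cloud_cost:.0f}/month, "
            f"local pays for itself in ~{breakeven_months:.0f} months")
      ```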

    • johant
      15 hours ago

      I’m testing the 14B Qwen DeepSeek R1 distill through ollama and it’s impressive. I think I could switch most of my current ChatGPT usage over to it (not a lot, I should admit). Hardware is an AMD 7950X3D with an NVIDIA 3070 Ti. Not the cheapest hardware, but not the most expensive either. It’s of course not as good as the full model on deepseek.com, but I can run it truly locally, right now.
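
      If anyone wants to poke at the same setup, here’s a minimal sketch of how I query it once the model is pulled. It assumes Ollama is serving on its default port 11434 and that the distill is tagged deepseek-r1:14b; check `ollama list` for the exact tag on your machine.

      ```python
      # Minimal sketch: send one prompt to a locally running Ollama server.
      # Assumptions: Ollama on its default port (11434) and the DeepSeek R1
      # 14B distill pulled as "deepseek-r1:14b" (check with `ollama list`).
      import requests

      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={
              "model": "deepseek-r1:14b",
              "prompt": "Summarize the trade-offs of running LLMs locally.",
              "stream": False,  # one JSON object back instead of a token stream
          },
          timeout=300,
      )
      resp.raise_for_status()
      print(resp.json()["response"])
      ```

      Everything stays on localhost; no tokens leave the machine.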

      • @[email protected]
        11 hours ago

        How much VRAM does your Ti pack? Is that the standard 8 GB of GDDR6?

        I ask because I’m surprised and impressed that a 14B model runs smoothly; rough math below.
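
        Here’s the back-of-envelope sketch behind my surprise (the half-byte-per-weight figure assumes a ~4-bit quant, and the 20% overhead for KV cache and buffers is my guess):

        ```python
        # Back-of-envelope VRAM estimate for a 14B-parameter model.
        # Assumptions: ~4-bit quantization (half a byte per weight) and
        # ~20% overhead for KV cache, activations, and runtime buffers.
        params = 14e9
        bytes_per_weight = 0.5
        overhead = 1.2
        vram_gib = params * bytes_per_weight * overhead / 1024**3
        print(f"~{vram_gib:.1f} GiB needed vs 8 GiB on a 3070 Ti")
        # Prints ~7.8 GiB -- tight enough that some layers likely
        # spill over to system RAM and run on the CPU.
        ```

        Which would also explain why a fast CPU like that 7950X3D helps.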

        Thanks for the insights!