• @BreadstickNinja
    15 hours ago

    There are finetunes of Llama, Qwen, etc., distilled from DeepSeek that implement the same pre-response thinking logic, but they are ultimately still the smaller models with some tuning. If you want to run locally and don’t have tens of thousands to throw at datacenter-scale GPUs, those are your best option, but they differ from what you’d get in the DeepSeek app.
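
    For anyone curious, here’s a rough sketch of what running one of those distills locally looks like with the Hugging Face transformers library. The repo id and generation settings are just examples (swap in whichever distill size your hardware can actually hold), not a recommendation of any particular setup:

    ```python
    # Minimal sketch: load a DeepSeek-distilled model locally with transformers.
    # Model id is an example; pick a distill size that fits your GPU/RAM.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed repo id

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype="auto"
    )

    messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # The distill emits its chain of thought before the final answer,
    # so leave plenty of room for new tokens.
    outputs = model.generate(inputs, max_new_tokens=1024)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```

    You’ll see the model write out its reasoning first and then the answer, same basic behavior as the app, just from a much smaller model.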