• AtHeartEngineer

    I haven’t seen a way to do that that doesn’t wreck the model

      • AtHeartEngineer

        I know how to download and run models. What I’m saying is that all the “uncensored” DeepSeek models are abliterated and perform worse.

          • AtHeartEngineer

            I’m not talking about the speed, I’m talking about the quality of the output. I don’t think you understand how these models are transformed into “uncensored” models; a lot of the time abliteration messes them up.
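
            For anyone wondering what abliteration actually does: roughly, you estimate a “refusal direction” from the difference in activations between prompts the model refuses and prompts it answers, then project that direction out of the weights. A minimal sketch, not any particular repo’s code, with shapes and layer wiring simplified:

            ```python
            # Sketch of abliteration (refusal-direction removal). Assumes you've already
            # collected residual-stream activations for "refused" and "answered" prompts;
            # real implementations hook specific layers and calibrate on many prompts.
            import torch

            @torch.no_grad()
            def refusal_direction(refused_acts: torch.Tensor, answered_acts: torch.Tensor) -> torch.Tensor:
                # Difference of mean activations, normalized to a unit vector.
                d = refused_acts.mean(dim=0) - answered_acts.mean(dim=0)
                return d / d.norm()

            @torch.no_grad()
            def ablate(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
                # W' = (I - d d^T) W: the layer can no longer write along `direction`.
                d = direction.to(weight.dtype)
                return weight - torch.outer(d, d) @ weight
            ```

            The catch is that the direction you project out also carried useful signal, so zeroing it in every layer tends to cost general quality, which is the degradation I keep seeing.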

            • @[email protected]

              Buddy, I’ve been running and testing the 7B and 14B models against the cloud DeepSeek. Any sources, any evidence to back up what you’re saying? Or just removed and complaining?

              • AtHeartEngineer

                I’m not talking about the cloud version at all. I’m talking about the 32B and 14B models vs the ones people have “uncensored”.

                I was hoping someone knew of an “uncensored” version of deepseek that was good, that could run locally, because I haven’t seen one.

                I don’t know what you mean by “removed”.

    • @474D

      You can do it in LM Studio in like 5 clicks; I’m currently using it.
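
      (And if you’d rather script it than click around: LM Studio can also serve whatever model you’ve loaded over its OpenAI-compatible local server, port 1234 by default. The model id below is just a placeholder for whichever build you actually downloaded.)

      ```python
      # Talking to a model loaded in LM Studio through its local OpenAI-compatible server.
      # Default port is 1234; the model id is a placeholder -- use the id LM Studio shows
      # for the model you actually have loaded.
      from openai import OpenAI

      client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

      resp = client.chat.completions.create(
          model="deepseek-r1-distill-qwen-14b",  # placeholder id
          messages=[{"role": "user", "content": "Say hi in one sentence."}],
      )
      print(resp.choices[0].message.content)
      ```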

      • AtHeartEngineer

        Running an uncensored DeepSeek model that doesn’t perform significantly worse than the regular DeepSeek models? I know how to download and run models; I just haven’t seen an uncensored DeepSeek model that performs as well as the baseline DeepSeek model.
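
        The quickest way I know to eyeball the difference is to run the same prompts through both builds side by side, assuming both are served through a local OpenAI-compatible endpoint (LM Studio, llama.cpp server, etc.); the model ids here are placeholders:

        ```python
        # Side-by-side sanity check of a baseline vs an "uncensored" build, both served
        # locally over an OpenAI-compatible API. Model ids are placeholders.
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:1234/v1", api_key="local")

        MODELS = ["deepseek-r1-distill-qwen-14b", "deepseek-r1-distill-qwen-14b-abliterated"]
        PROMPTS = [
            "Write a Python function that merges two sorted lists.",
            "Explain the difference between TCP and UDP in three sentences.",
        ]

        for prompt in PROMPTS:
            print(f"\n=== {prompt}")
            for model in MODELS:
                reply = client.chat.completions.create(
                    model=model,
                    messages=[{"role": "user", "content": prompt}],
                    temperature=0.0,  # keep it deterministic-ish for comparison
                )
                print(f"\n--- {model}\n{reply.choices[0].message.content}")
        ```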

        • @474D

          I mean, obviously you need to run a lower-parameter model locally; that’s not a fault of the model, it’s just that you don’t have the same computational power.
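
          (Rough memory math, to put numbers on the “computational power” point: weights alone take about parameter count × bits per weight ÷ 8 bytes, before KV cache and runtime overhead, so a 32B model is a much bigger ask than a 14B on a single consumer GPU. A quick back-of-the-envelope sketch:)

          ```python
          # Back-of-the-envelope VRAM estimate for model weights only.
          # Ignores KV cache and runtime overhead, so treat it as a lower bound.
          def weight_gb(params_billion: float, bits_per_weight: float) -> float:
              return params_billion * 1e9 * bits_per_weight / 8 / 1e9  # gigabytes

          for params in (14, 32):
              for bits in (4.5, 8.0):  # roughly a 4-bit quant vs an 8-bit quant
                  print(f"{params}B @ {bits} bits/weight ≈ {weight_gb(params, bits):.1f} GB")
          ```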

          • AtHeartEngineer

            In both cases I was talking about local models: the 32B-parameter deepseek-r1 vs an equivalent that someone has “uncensored” on Hugging Face.
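
            If anyone wants to check the degradation more concretely than eyeballing outputs, comparing perplexity on the same held-out text is a cheap, coarse start. A sketch with transformers; the baseline repo id is the real DeepSeek distill, the “abliterated” one is a placeholder for whichever Hugging Face upload you’re testing (and at 32B you’d realistically want a quantized or multi-GPU load):

            ```python
            # Rough perplexity comparison between a baseline DeepSeek-R1 distill and an
            # "uncensored" variant. The second repo id is a placeholder, not a real upload.
            import torch
            from transformers import AutoModelForCausalLM, AutoTokenizer

            TEXT = open("heldout.txt").read()  # any text neither model was tuned on

            def perplexity(repo_id: str, text: str) -> float:
                tok = AutoTokenizer.from_pretrained(repo_id)
                model = AutoModelForCausalLM.from_pretrained(
                    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
                )
                ids = tok(text, return_tensors="pt").input_ids[:, :2048].to(model.device)
                with torch.no_grad():
                    loss = model(input_ids=ids, labels=ids).loss  # mean next-token cross-entropy
                return float(torch.exp(loss))

            for repo in (
                "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",           # baseline distill
                "someuser/DeepSeek-R1-Distill-Qwen-32B-abliterated",  # placeholder
            ):
                print(repo, round(perplexity(repo, TEXT), 2))
            ```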