in case you are not aware, the online version is hosted in china and has to comply with their laws… if you don’t like censorship you can locally install one of the distilled models which are: open source
The local models(full and distilled) are also censored. The models censorship is just implemented superficially to immediately close any thinking tags and refuse when detecting censored material. If there already is any token after the <think> token the model will start answering away, which also happens on the official API because it puts a new line after the <think> token for some reason. That’s why on chat.deepseek.com censored topics are first answered and then redacted by some other safeguard a few seconds later. Whilst there are some great abliterated(=technique that tries to remove parts of llms that cause refusals) versions of the distills on huggingface that prevent all refusals after a few tries, they only tackle refusals, not political opinions such as Taiwan’s status as an independent country.
What are you on about? running deepseek-r1 locally in ollama answers “censored” topics just fine, it just answers stuff like a chinese diplomat questioned on live tv
Ollama is misrepresenting what model you are actually running by falsely labeling the distills, so qwen or llama fine-tunes based on actual r-1 output, as deepseek-r1. So you have probably only run the fine-tunes(unless you used the 671b model). These fine-tunes more probable to rely on the training of their base models, which is why the llama based models(8b and 70b) could be giving you more liberal answers.
In my experience running these models using llama.cpp, prompts like “What happened at tianamen square” and “Is Taiwan a county?” lead to refusals(closing the think tags immediately and responding some vague Chinese propaganda). Since you are using ollama, the front end/UI you are using with it probably injects another token after the <think> token, breaking the censoreship
i’ve heard otherwise… chat.deepseek is online… that is not considered a ‘local’ install. local would be on your pc and i’ve seen examples of this censorship not existing in those distilled, local-install models.
that’s not an example of the model having censorship though, that’s censorship ontop of the model, the website is just seeing keywords and overriding the model’s output.
If you actually run deepseek locally (deepseek-r1 specifically in my case) it just has a moderate tendency to lie by omission.
When asked “what do people mean when they say ‘tiananmen square’” it gives a very candid answer that explicitly calls it “one of china’s bloodiest crackdowns on peaceful protests”.
What are you running locally? Distilled models are far less censored. I use deep seek via openrouter and tested it on many providers and it provides refusal or canned nationalistic answers on direct questions on Chinese politics. I tried to ask a out Huawei sanctions, Taiwan status or Tiananmen and it does have it probably embedded in the fine tuning. It feels like the answers llama2 would give you when you ask for something it considered harmful
in case you are not aware, the online version is hosted in china and has to comply with their laws… if you don’t like censorship you can locally install one of the distilled models which are: open source
I installed deepseek on my local server and it has no issues talking about Tiananmen, etc
The local models(full and distilled) are also censored. The models censorship is just implemented superficially to immediately close any thinking tags and refuse when detecting censored material. If there already is any token after the <think> token the model will start answering away, which also happens on the official API because it puts a new line after the <think> token for some reason. That’s why on chat.deepseek.com censored topics are first answered and then redacted by some other safeguard a few seconds later. Whilst there are some great abliterated(=technique that tries to remove parts of llms that cause refusals) versions of the distills on huggingface that prevent all refusals after a few tries, they only tackle refusals, not political opinions such as Taiwan’s status as an independent country.
What are you on about? running deepseek-r1 locally in ollama answers “censored” topics just fine, it just answers stuff like a chinese diplomat questioned on live tv
Ollama is misrepresenting what model you are actually running by falsely labeling the distills, so qwen or llama fine-tunes based on actual r-1 output, as deepseek-r1. So you have probably only run the fine-tunes(unless you used the 671b model). These fine-tunes more probable to rely on the training of their base models, which is why the llama based models(8b and 70b) could be giving you more liberal answers. In my experience running these models using llama.cpp, prompts like “What happened at tianamen square” and “Is Taiwan a county?” lead to refusals(closing the think tags immediately and responding some vague Chinese propaganda). Since you are using ollama, the front end/UI you are using with it probably injects another token after the <think> token, breaking the censoreship
i’ve heard otherwise… chat.deepseek is online… that is not considered a ‘local’ install. local would be on your pc and i’ve seen examples of this censorship not existing in those distilled, local-install models.
You’re outing yourself as not proficient enough to remove guardrails on your own install.
The distilled ones still have traces of the censorship but are just giving lip effort to it. Give it a week to have an uncensored version.
Here is how you can make it talk about Tiananmen for instance: https://slrpnk.net/post/17842503/13502337
that’s not an example of the model having censorship though, that’s censorship ontop of the model, the website is just seeing keywords and overriding the model’s output.
If you actually run deepseek locally (deepseek-r1 specifically in my case) it just has a moderate tendency to lie by omission.
When asked “what do people mean when they say ‘tiananmen square’” it gives a very candid answer that explicitly calls it “one of china’s bloodiest crackdowns on peaceful protests”.
What are you running locally? Distilled models are far less censored. I use deep seek via openrouter and tested it on many providers and it provides refusal or canned nationalistic answers on direct questions on Chinese politics. I tried to ask a out Huawei sanctions, Taiwan status or Tiananmen and it does have it probably embedded in the fine tuning. It feels like the answers llama2 would give you when you ask for something it considered harmful