Cat to Technology · English · 22 days ago
Perplexity open-sources R1 1776, a version of the DeepSeek R1 model that CEO Aravind Srinivas says has been "post-trained to remove the China censorship". (www.perplexity.ai)
Cross-posted to: [email protected], [email protected]
@brucethemoose · English · 2 days ago
In the 32B range? I think we have plenty of uncensored thinking models there; maybe try fusion 32B. I'm not an expert though, as models trained from base Qwen have been sufficient for that, for me.
@[email protected] · English · 2 days ago
I just want to mess with this one too. I had a hard time finding an abliterated one before that didn't regularly fail the Tiananmen Square question.