@[email protected] to TechnologyEnglish • edit-23 days agoPerplexity open sources R1 1776, a version of the DeepSeek R1 model that CEO Aravind Srinivas says has been “post-trained to remove the China censorship”.www.perplexity.aiexternal-linkmessage-square29fedilinkarrow-up1210arrow-down114cross-posted to: [email protected][email protected]
minus-square@[email protected]linkfedilinkEnglish5•3 days agoListen, I’m highly critical of the CCP, but LLMs aren’t facts machines, they are make text like what they are trained on machines. They have no grasp of truth, and we can only get some sense of truth of what the average collective text response of its dataset (at best!).
I’m talking about the example texts