I haven’t looked into running any of these models myself, so I’m not too informed, but isn’t the censorship highly dependent on the training data? I assume they didn’t release theirs.
Videos of censored answers show R1 beginning to give a valid answer, then deleting the answer and saying the question is outside its scope. That suggests the censorship isn’t in the training data but in some post-processing filter.
But even if the censorship were at the training level, the whole buzz about R1 is how cheap it is to train. Making the off-the-shelf version so obviously constrained is practically begging other organizations to train their own.
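For illustration, that retract-after-streaming behavior is trivial to produce with a wrapper around the token stream. This is purely a hypothetical sketch of what such a filter could look like; the blocklist, event format, and function names are all my assumptions, not anything DeepSeek has published.

```python
from typing import Iterable, Iterator, Tuple

# Hypothetical blocklist -- the real filter (if one exists) isn't public.
BLOCKLIST = {"blocked topic", "another blocked topic"}
REFUSAL = "Sorry, that question is outside my scope."

def stream_with_filter(tokens: Iterable[str]) -> Iterator[Tuple[str, str]]:
    """Relay tokens to the client until a blocked term appears in the
    accumulated text, then emit a retract event replacing the partial answer."""
    shown = ""
    for tok in tokens:
        shown += tok
        if any(term in shown.lower() for term in BLOCKLIST):
            # The client wipes what it already displayed -- this is what
            # produces the visible "delete the answer" in those videos.
            yield ("retract", REFUSAL)
            return
        yield ("token", tok)

# Example: the first two chunks render, then the answer gets retracted.
for event, text in stream_with_filter(["The answer ", "involves a ", "blocked topic."]):
    print(event, text)
```

The point of the sketch is just that a filter like this runs after generation, so the model itself produced a real answer before the service yanked it.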
> beginning to give a valid answer, then deleting the answer
If it IS open source, someone could undo this, but I assume it’s more difficult than a single on/off button. That, along with it being self-hostable, means it might be pretty good. 🤔
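For what it’s worth, the released weights are on Hugging Face, so self-hosting looks roughly like the sketch below. I’m assuming the distilled checkpoint id from the public release (the full model needs serious hardware); with local weights there is no API-side filter between you and the model.

```python
# Minimal self-hosting sketch using Hugging Face transformers.
# Model id assumed from the public R1 release; swap in a larger
# distill or the full model if your hardware allows.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    device_map="auto",  # place layers on GPU if one is available
)

result = generate(
    "Explain briefly why a language model might refuse to answer a question.",
    max_new_tokens=256,
)
print(result[0]["generated_text"])
```

Note this only removes any server-side layer; refusals trained into the weights themselves would still show up, which is exactly the training-data question upthread.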
Making the censorship blatantly obvious while simultaneously releasing the model as open source feels a bit like malicious compliance.