Very informative read, thank you. Let's wait for Open-R1 to become available for download, and use that time to check the model's code for bugs (likely, since every large piece of software has them!), backdoors (which can never be ruled out), and opportunities for further optimization.
I have to admit that their idea to “milk” DeepSeek-R1 for its own reasoning data is intriguing. I wonder how early in the training process the political bias got its foot in the door. Or is it a late-stage filter?
Just as with American models, the political bias is irrelevant. Realistically, you are using the model for its reasoning capabilities, not for its answer to “what happened in Tiananmen”.
I don’t agree that an artificially induced bias is in any way irrelevant. And the tales of “reasoning” capabilities are quite overblown, imho.