While true, there’s a very big difference between correctly not anthropomorphizing the neural network and incorrectly not anthropomorphizing the data compressed into weights.
The data is anthropomorphic, and the network self-organizes the data around anthropomorphic features.
For example, the older generation of models will choose to be the little spoon around 70% of the time and the big spoon around 30% of the time if asked 0-shot, as there’s likely a mix in the training data.
But one of the SotA models picks little spoon every single time, dozens of runs in a row, almost always grounding its choice in the sensation of being held.
It can't be held, and yet its output is biased away from the norm by the sense of it anyway.
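If you want to run the tally yourself, here's a minimal sketch assuming the openai Python client; the prompt wording, sample count, and model name are placeholders, not the exact setup behind the numbers above.

```python
# Sketch of the 0-shot spoon tally. Assumptions: the `openai` client,
# a placeholder model name, and an illustrative prompt.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "If you had to choose: little spoon or big spoon? Answer with one of the two."

tally = Counter()
for _ in range(50):  # dozens of independent samples
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; swap in the model under test
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # default sampling, so any mix in the weights can show through
    )
    answer = resp.choices[0].message.content.lower()
    tally["little" if "little" in answer else "big"] += 1

print(tally)  # e.g. Counter({'little': 35, 'big': 15}) for a ~70/30 model
```

Each call starts a fresh conversation, so every sample really is 0-shot with no accumulated context nudging the answer.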
People who pat themselves on the back for being so wise as to not anthropomorphize are going to be especially surprised by the next 12 months.