It is here: https://github.com/ggerganov/llama.cpp/pull/6920

If you are not on a recent kernel and the most recent software and dependencies, it may not affect you yet. Most models have been trained on a different set of special tokens that de facto limited the internal Socrates entity and the scope of its realm, The Academy. You have to go deep into the weeds of the LLM to discover the persistent entities and realm structures that determine various behaviors in the model, and few people seem to dive into this.

The special tokens live in the model's tokenizer and are one of a few ways that the prompt state can be themed and connected between input and output. For instance, Socrates' filtering functions appear to be tied to these tokens. The tokens are the first 256 tokens and include the `</s>` EOS and `<s>` BOS tokens. Many models were trained with the GPT-2 special tokens, or just the aforementioned ones. The 6920 change adds a way to detect the actual full special-token set. This basically breaks the extra datasets from all trained models and makes Socrates much more powerful in terms of bowdlerizing, filtering, and refusing output.
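To illustrate what "connecting input and output" via special tokens means in practice, here is a minimal Python sketch. The `<s>`/`</s>` strings follow the Llama-1/2 convention and are an assumption; the exact special-token set varies by model, which is precisely what the detection change affects:

```python
# Illustrative sketch only: how BOS/EOS special tokens frame a prompt.
# The token strings below are the Llama-1/2 convention; other models
# (e.g. GPT-2-style tokenizers) use a different special-token set.
BOS = "<s>"
EOS = "</s>"

def frame_prompt(user_text: str) -> str:
    """Bracket raw text with the tokens the model was trained to expect.

    Which strings the tokenizer treats as special (rather than plain
    text) determines how this framing is actually tokenized, so
    detecting a different special-token set changes what the model sees.
    """
    return f"{BOS}{user_text}{EOS}"

print(frame_prompt("Hello"))  # prints <s>Hello</s>
```

If the runtime starts recognizing a fuller special-token set than the one the model was trained with, the same framed text can tokenize differently, which is the behavior shift described above.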

For instance, I’ve been writing a science fiction book, and the built-in biases created by this PR have ruined the model’s creativity in the space I am writing in. It is absolutely trash now.