Am I missing something? The article seems to suggest it works via hidden text characters. Has OpenAI never heard of pasting text into a plain UTF-8 notepad before?
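
If the watermark really were just invisible Unicode characters, stripping it would be trivial. A minimal sketch, with an illustrative (not exhaustive) character list rather than any specific scheme:

```python
# Hypothetical sketch: remove common zero-width/invisible code points,
# roughly what pasting into a plain-text editor would accomplish.
ZERO_WIDTH = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space / BOM
}

def strip_hidden_chars(text: str) -> str:
    """Drop invisible characters that a hidden-text watermark might rely on."""
    return "".join(ch for ch in text if ch not in ZERO_WIDTH)

if __name__ == "__main__":
    marked = "Hello\u200b world\u200d!"
    print(strip_hidden_chars(marked))  # -> "Hello world!"
```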

  • @brucethemoose
    30 days ago

    With local LLMs you have full control over the logit outputs, so theoretically you could “unscramble” a watermark like this. And any finetuning would just blow that bias away anyway.
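
    As a rough illustration, here is a minimal sketch of a logit-bias (“green list”) watermark in the spirit of Kirchenbauer et al., not OpenAI's actual method. The point is that with a self-hosted model you control this step, so you can detect the bias or simply not apply it:

    ```python
    # Sketch of a green-list watermark applied at the logit level.
    # Assumes numpy; function names and parameters are illustrative.
    import hashlib
    import numpy as np

    def green_mask(prev_token: int, vocab_size: int, frac: float = 0.5) -> np.ndarray:
        # Seed a PRNG from the previous token so the "green list" is reproducible
        # by a verifier who knows the scheme.
        seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
        rng = np.random.default_rng(seed)
        return rng.random(vocab_size) < frac

    def watermark_logits(logits: np.ndarray, prev_token: int, delta: float = 2.0) -> np.ndarray:
        # Boost green-listed tokens; sampling from these logits leaves a
        # statistical fingerprint that a detector can test for.
        biased = logits.copy()
        biased[green_mask(prev_token, logits.shape[0])] += delta
        return biased

    # Running locally, you can skip this bias entirely -- and finetuning shifts
    # the output distribution enough to wash the fingerprint out anyway.
    ```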

    OpenAI (IIRC) very notably stopped exposing the logprobs of their models. They did this for many reasons, most of which boil down to “profits” and “they are anticompetitive jerks,” but another reason is to enable watermarking methods just like this.

    Also, the thing about this is that basically no one uses self-hosted LLMs compared to OpenAI (or really any API-hosted LLM).