@CeeBee_Eh:

> Both sentences are true. And based on vocabulary of both, the model can output the following sentences:
>
> 1. Cats have feathers.
> 2. Birds have fur.

This is not how the models are trained, nor how they work.

> Both are false but the model doesn’t “know” it. All that it knows is that “have” is allowed to go after both “cats” and “birds”, and that both “feathers” and “fur” are allowed to go after “have”.

Demonstrably false. This isn’t how LLMs are trained or built.
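To make that concrete, here’s a minimal sketch (assuming the Hugging Face transformers library and the small “gpt2” checkpoint; exact scores vary by model) that asks a trained LM how probable each sentence is:

```python
# Minimal sketch: compare the average log-probability a trained LM assigns
# to each sentence. Assumes `pip install torch transformers`.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_prob(sentence: str) -> float:
    """Mean per-token log-probability the model assigns to the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean cross-entropy
        # of predicting each next token, i.e. the negative mean log-prob.
        loss = model(ids, labels=ids).loss
    return -loss.item()

print(avg_log_prob("Cats have fur."))       # noticeably higher
print(avg_log_prob("Cats have feathers."))  # noticeably lower
```

If the model really only tracked which words are “allowed” to follow which, both sentences would score about the same; in practice the scores are far apart.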

The contextual relationships between word embeddings that form during training are evidence enough on their own. Those relationships in the model’s high-dimensional vector space are an emergent property; they exist nowhere in the dataset itself.
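You can see a toy version of that emergence even with old-school static embeddings (a sketch assuming gensim and its downloadable GloVe vectors, which are a much simpler stand-in for the contextual embeddings inside an LLM):

```python
# Sketch with pretrained GloVe vectors via gensim: the geometry of the
# embedding space encodes cat->fur / bird->feathers relationships that no
# single training sentence spells out.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # one-time ~130 MB download

print(vectors.similarity("cat", "fur"))       # noticeably higher...
print(vectors.similarity("cat", "feathers"))  # ...than this

# Analogy arithmetic: the nearest words to (fur - cat + bird) typically
# include "feathers", a relation stated nowhere in the corpus as such.
print(vectors.most_similar(positive=["fur", "bird"], negative=["cat"], topn=3))
```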

If you want a better understanding of what I just said, take a look at this Computerphile video from four years ago. It came out before the LLM hype and before GPT-3, which was the big leap in LLMs.