• @TropicalDingdong
    5 points · 10 months ago

    It's kind of funny, because right now GPT-4 doesn't even achieve GPT-4 levels of performance.

    It's a real issue when you don't have access to the underlying models or how they were trained. We know they've repeatedly nerfed/broken this model.

  • Lvxferre
    3 points · 10 months ago

    I feel like a broken record, but…

    Seriously, the current large "language" models - or should I say, large syntax models? - are a technological dead end. They might find a lot of applications, but they certainly will not evolve into the "superhuman capabilities" from the tech bros' wet dreams.

    In the best case, all that self-instruction will do is play whack-a-mole with hallucinations. In the worst case, the model will degenerate.
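    The degeneration scenario has a standard toy illustration (this is my sketch of the general "model collapse" intuition, not something from the article): fit a simple model to data, sample a new dataset from the fit, refit on those samples, and repeat. Because the fitted variance is a biased, noisy estimate, the distribution's spread tends to shrink over generations. The Gaussian setup and all parameter choices below are illustrative assumptions.

    ```python
    import random
    import statistics

    # Toy "training on your own output" loop: each generation fits a
    # Gaussian to the previous generation's samples, then the next
    # generation is sampled from that fit. The MLE variance estimate is
    # biased low, so the spread tends to collapse over many generations.
    random.seed(0)

    N = 20            # samples per generation (small, to exaggerate the effect)
    GENERATIONS = 500

    def next_generation(data):
        """Fit mean/std to `data`, then sample a fresh dataset from the fit."""
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)  # population (MLE) std: biased low
        return [random.gauss(mu, sigma) for _ in range(len(data))]

    data = [random.gauss(0.0, 1.0) for _ in range(N)]
    spread = [statistics.pstdev(data)]
    for _ in range(GENERATIONS):
        data = next_generation(data)
        spread.append(statistics.pstdev(data))

    print(f"initial spread: {spread[0]:.3f}, final spread: {spread[-1]:.3f}")
    ```

    An LLM fine-tuned on its own generations is a far more complicated object than a Gaussian, but the mechanism this sketch shows - estimation error compounding when a model learns from its own samples - is the same one behind the degeneration worry.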

    You'll need a different architecture to get meaningfully past that. Probably one that doesn't handle semantics as an afterthought, but as its own layer - a central and big one.