It’s interesting that they were able to get a model with 350M parameters to outperform others with 175B parameters.

  • @[email protected]
    1 year ago

    Interesting indeed, even though it seems to work only on specific tasks. I definitely support this direction, though. LLMs are getting out of hand (and have been for a while now), having slipped from researchers’ grasp into big tech companies’. I think the work the open-source and research community is already doing with the ChatGPT-lookalike models is incredible.

  • @Bluetreefrog
    1 year ago

    Aaaand, logic models come back around. Better dust off that old Prolog textbook.