The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) – the premier international conference in natural language processing – reveals that while LLMs have a superficial ability to follow instructions and are highly proficient in language, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe.

  • @Just_Pizza_Crust
    625 days ago

    So AI is only harmful when a person instructs it to do so?

    That sounds an awful lot like the “guns don’t kill people, people kill people” argument.