A long read, but fascinating.

  • @Zeth0s
    -1 · 1 year ago

    Sorry, but that article is complete nonsense.

    LLMs are pretty clear in what they do. It’s true that they are often superficial, but most of this superficiality comes from the creators trying to escape liability. ChatGPT is often evasive or superficial on purpose, because OpenAI is trying to find a balance between usefulness and the risk of being sued.

    LLMs do not try to be smart. They don’t do tricks. They are built to give the best possible answer they are capable of (given how they are trained and built). Sometimes these answers are good, sometimes not, sometimes mixed.

    Why write a whole article trying to demonstrate fraud in a tool? Is a washing machine a fraud because it tries to convince me clothes are clean? I am satisfied by the results, given it is a machine; my aunt complains that “washing by hand” is better.

    Same situation here: some people are happy, some would like more…

    • Flying SquidOP
      13 · 1 year ago

      This is about how people perceive LLMs, not the LLMs themselves.

      • @Zeth0s
        -7 · 1 year ago

        The article implies a fraud in the LLMs… compares them to psychics.

        Nonsense. They are just tools. One can like them or not.

        • Flying SquidOP
          16 · 1 year ago

          No, it doesn’t. It doesn’t claim they’re frauds. It claims that people are seeing things in them that aren’t there and giving them abilities they don’t have. I don’t think you read it all the way through.

          • @Zeth0s
            -3 · 1 year ago

            I did, and I believe the author is the one using psychic tricks. He tries to persuade readers to see things that are not there. Much like what he claims the LLMs are doing, he sets the scene with comparisons worded to sound credible. But in practice none of those comparisons hold up. They appear true because the author is good with words, and they leave the reader with the message that LLMs are frauds. It is a well-executed rhetorical exercise, but it is still nonsensical. An LLM is just a model that doesn’t try to be intelligent; it tries to answer questions or, better, to complete text.

            • Flying SquidOP
              7 · 1 year ago

              Again, it’s not about what the model tries to be. It’s about what people perceive it to be. I don’t know how you can say you read the whole article when you keep claiming it’s saying something it isn’t saying.

            • @[email protected]
              3 · 1 year ago

              If you read it all the way through, you didn’t comprehend any of it. Maybe work on that and try again.

        • FauxPseudo
          2 · 1 year ago

          You didn’t understand the assignment.