• @[email protected]
    90
    5 months ago

    This comment is global-warming-denialism levels of stupid. I’m honestly shocked.

    LLMs have no such implications for the field of linguistics. They’re barely relevant at all.

    Do I really need to point out that human beings do not learn language the way LLMs “learn” language? That human beings do not use language the way LLMs use language? Or that human beings are not mathematical models? Not even approximately. I fucking hate this timeline.

    • @[email protected]
      41
      5 months ago

      Thank you for saying it. That really was a depressingly incurious comment.

      Chomsky, Ian Roberts and Jeffrey Watumull on the topic: https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html

      Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.

    • @mowrowow
      11
      5 months ago

      I’m not saying this in defense of LLMs; I genuinely think they are a waste of so many things in this world, and I can’t wait for this hype cycle to be over.

      However, there is a lot of research backing statistical learning in language acquisition; it’s specifically my friends’ research subject. It’s a very big thing in interventions for language delays.

      It is opposed to Chomsky’s innate-language theory, which at this point I think almost any linguist or language/speech-sciences researcher would tell you isn’t a well-accepted theory (at least as a holistic explanation; it could certainly still be true to an extent and be part of other systems).

      tl;dr: LLMs are stupid, but it’s not broadly true that the way they “learn language” is entirely different from how humans do. The real difference is that they fail to actually learn anything even when imitating humans.

    • @[email protected]
      -1
      5 months ago

      This would be true if Chomsky’s claim were simply that he was studying human language acquisition and that machines are different, but his claim was that machines can’t learn human languages because they lack some intuitive, innate grammar.

      Saying an LLM hasn’t learned language becomes harder and harder the more you talk to it and the more it starts walking like a duck and quacking like a duck. To make that claim you’ll need some evidence to counter the demonstrable understanding the LLM displays. Chomsky, in his New York Times response, just gives his own unprovable theories on innate grammar and some examples of questions LLMs “can’t answer,” but if you actually ask any modern LLM, it answers them fine.

      You can define “learning” and “understanding” in a way that excludes LLMs, but you’ll end up relying on unprovable abstract theories unless you can come up with an example of a question or prompt that any human would answer correctly and LLMs won’t, to demonstrate that difference. I have yet to see any such example. There’s plenty of evidence of them hallucinating when they reach the edge of their understanding, but that is something humans do as well.

      Chomsky is still a very important figure, and his political work in Manufacturing Consent is just as relevant as when it was written over 20 years ago. His work on language, though, is on shaky ground, and LLMs have made it even shakier.

      • @[email protected]
        5
        5 months ago

        Do you really think Chomsky’s UG hypothesis from half a century ago was formulated to deny that some dumb mathematical model would be able to simulate human speech?

        Almost nothing you’ve written has any grounding in empirical reality. You have a sentence that reads something like “the more you talk to LLMs, the harder it is to deny that they can use and understand language.”

        You might as well say that the longer you stare at a printed painting, the harder it is to deny that printers make art. LLMs do not “understand” their outputs or their inputs. If we feed them nonsense, they output nonsense. There’s no underlying semantics whatsoever. An LLM is a mathematical model.

        I know it looks like magic, but it’s not actually magic. And even if it were, it would have nothing to do with linguistics, which is concerned with how humans, not computers, understand and manipulate language. This whole ridiculous conversation is a non sequitur.

        • @TropicalDingdong
          -3
          5 months ago

          You are at science-denialism levels of ignorance in this conversation, or perhaps you don’t understand the underlying philosophy of scientific inquiry well enough to see why LLMs were basically able to break the back of both UG and innate acquisition.

          You seem like the kind of person who cheerleads “in the spirit of science” but doesn’t actually engage in it as a philosophical enterprise. You didn’t seem to notice the key point that @[email protected] made, which is that unlike UG, LLMs are actually testable. That’s the whole thing right there, and if you don’t get the difference, that’s fine, but it speaks to your level of understanding of how one actually conducts scientific inquiry.

          And if you want to talk about incurious:

          You might as well say that the longer you stare at a printed painting, the harder it is to deny that printers make art. LLMs do not “understand” their outputs or their inputs. If we feed them nonsense, they output nonsense. There’s no underlying semantics whatsoever. An LLM is a mathematical model.

          Specifically, “there’s no underlying semantics whatsoever” is the key linchpin: underlying semantics is exactly what UG demands, and LLMs demonstrate that it is not strictly necessary. That is why Chomsky’s house of cards crumbles against the counterfactual to UG and innate acquisition that LLMs offer. I had a chance to ask him this question directly, about six months before that op-ed was published, and his answer was about as incurious about why LLMs, and big complex networks in general, are able to learn as what you’ve offered here: basically the same regurgitation of UG and innate acquisition that he gives in the op-ed. And the key point is that yes, LLMs are just a big bucket of linear algebra, but they represent an actually testable instrument for learning how a language might be learned. That is the most striking part of Chomsky’s response, and I found it particularly galling.

          And it is interesting that yes, if you feed transformers (I’m going to start using the right term here) unstructured garbage, you get unstructured garbage out. However, if there is something there to learn, they seem to be at least somewhat effective at finding it. And that occurs in non-language systems as well: image transformers, transformers used to predict series data like temperature or stock prices, even DNA and RNA sequences. We’re probably going to have transformers capable of translating animal vocalizations like whale and dolphin songs. If you have structured series data, transformers seem effective at learning its patterns and generating coherent responses, as the sketch below illustrates.
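
          To make the series-data claim concrete, here is a minimal sketch (my own, purely illustrative; it assumes PyTorch is installed, and every name and hyperparameter in it is arbitrary) of a tiny transformer encoder learning to predict the next value of a sine series:

          ```python
          # Illustrative only: a tiny transformer encoder fit to a toy series.
          import math
          import torch
          import torch.nn as nn

          torch.manual_seed(0)

          class TinySeriesTransformer(nn.Module):
              def __init__(self, d_model=32, nhead=4, window=16):
                  super().__init__()
                  self.embed = nn.Linear(1, d_model)     # scalar -> d_model features
                  self.pos = nn.Parameter(torch.randn(window, d_model) * 0.02)
                  layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
                  self.encoder = nn.TransformerEncoder(layer, num_layers=2)
                  self.head = nn.Linear(d_model, 1)      # predict the next scalar

              def forward(self, x):                      # x: (batch, window, 1)
                  h = self.encoder(self.embed(x) + self.pos)
                  return self.head(h[:, -1])             # last position -> next value

          # Toy data: sliding windows over a sine wave.
          window = 16
          series = torch.sin(torch.linspace(0, 8 * math.pi, 1000))
          X = torch.stack([series[i:i + window]
                           for i in range(len(series) - window)]).unsqueeze(-1)
          y = series[window:].unsqueeze(-1)

          model = TinySeriesTransformer(window=window)
          opt = torch.optim.Adam(model.parameters(), lr=1e-3)
          for _ in range(200):
              opt.zero_grad()
              loss = nn.functional.mse_loss(model(X), y)
              loss.backward()
              opt.step()
          print(f"final MSE: {loss.item():.5f}")  # shrinks as the pattern is learned
          ```

          None of that is language-specific: swap the sine wave for any structured sequence and the same machinery applies.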

          Here’s the thing: Chomsky’s UG represented a monolith in the world of language, language acquisition, and learning, and frankly it was an actual barrier to progress in the entire domain, because we now have a counterfactual where learning occurs and neither UG nor innate acquisition is necessary or at all relevant. It’s not a complete collapse of the ideas, but it’s about as close as we’ll get, because in at least one case of language acquisition they are completely irrelevant.

          And honestly, if you can’t handle criticism of ideas in the sciences, you don’t belong in the domain. Breaking other people’s ideas is fundamental to the process, and it’s problematic when people assume you need some alternative in place before you can break someone else’s work.

          • @[email protected]
            4
            5 months ago

            the key linchpin: underlying semantics is exactly what UG demands, and LLMs demonstrate that it is not strictly necessary

            You know what, I’m going to be patient. Let’s syllogize your argument so everyone can get on the same page, shall we?

            1. LLMs have various properties.
            2. ???
            3. Therefore, the UG hypothesis is wrong.

            This argument is not valid, because it’s missing at least one premise. Once you come up with a valid argument, we can debate its premises. Until then, I can’t actually respond, because you haven’t said anything substantive.

            The mainstream opinion in linguistics is that LLMs are mostly irrelevant. If you believe otherwise (for instance, that LLMs can offer insight into some abstract UG hypothesis about developmental neurobiology), explain why, and maybe publish your theory for peer review.

            • @TropicalDingdong
              -5
              5 months ago

              You don’t need to project a false argument onto what I was saying.

              Chomsky’s basic arguments:

              1: UG requires understanding the semantic roles of words and phrases to map syntactic structures onto semantic structures.

              2: UG posits that certain principles of grammar are universal, and that syntactic and semantic representation is required because meaning changes with structure. The result is semantic universals: basic meanings that appear across all languages.

              3: Semantic bootstrapping is then invoked to explain how children use their understanding of semantic categories to learn the syntactic structures of language.

              LLMs torpedo all of this as unnecessary to language acquisition, because they offer at least one example where none of the above need be invoked. LLMs have no innate understanding of language; it’s just pattern recognition and association. In UG, semantics is intrinsically linked to syntactic structure; in an LLM, semantics is learned indirectly through exposure rather than through an innate framework. LLMs show that UG and all of its complexity are totally unnecessary in at least one case of demonstrated language acquisition. That’s huge. It’s beyond huge. It gives us a testable, falsifiable path forward that UG didn’t. The toy sketch below shows the flavor of “semantics through exposure alone.”
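
              Since “semantics learned indirectly through exposure” can sound hand-wavy, here is a toy sketch (my own, illustrative only; plain Python, standard library) of the distributional idea underneath it: build word vectors purely from co-occurrence counts, with no grammar rules anywhere, and words used in similar contexts end up with similar vectors:

              ```python
              # Illustrative only: distributional "meaning" from raw co-occurrence.
              from collections import defaultdict
              from math import sqrt

              corpus = [
                  "the cat drinks milk", "the dog drinks water",
                  "the cat chases the mouse", "the dog chases the cat",
              ]

              # Count co-occurrences within a +/-1 word window.
              cooc = defaultdict(lambda: defaultdict(int))
              for sentence in corpus:
                  words = sentence.split()
                  for i, w in enumerate(words):
                      for j in range(max(0, i - 1), min(len(words), i + 2)):
                          if i != j:
                              cooc[w][words[j]] += 1

              vocab = sorted({w for s in corpus for w in s.split()})

              def vector(word):
                  return [cooc[word][v] for v in vocab]

              def cosine(a, b):
                  dot = sum(x * y for x, y in zip(a, b))
                  norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
                  return dot / norm if norm else 0.0

              # "cat" and "dog" appear in interchangeable contexts, so their
              # vectors end up closer to each other than to "milk".
              print(cosine(vector("cat"), vector("dog")))   # relatively high
              print(cosine(vector("cat"), vector("milk")))  # lower
              ```

              An LLM is doing something far richer than this, but the principle is the same: similarity of use stands in for meaning, and no innate grammar is consulted anywhere.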

              The mainstream opinion in linguistics is that LLMs are mostly irrelevant.

              Largely because of Chomsky. To invoke Planck’s principle: science advances one funeral at a time. Linguistics will finally be able to evolve past the rut it’s been in, and we now have real technical tools to do the kind of testable, reproducible, quantitative analysis at scale that it needs. We’re going to see more change in what we understand about language over the next five years than we’ve learned in the previous fifty. Before now we had nothing but baby humans with which to study the properties of language acquisition; human language acquisition is now just a subset of the domain, because we can actually talk about and study language acquisition outside the context of humans.

              In a few more years, linguistics won’t look at all like it did four years ago. If departments don’t adapt to this new paradigm, they’ll become like all those now-laughable geography departments that didn’t adapt to the satellite revolution of the 1970s: funny little backwaters of outdated modes of thinking that the world has passed by. LLMs for the study of language acquisition are like the invention of the microscope, and Chomsky completely missed the boat because it wasn’t his boat.

              • @[email protected]
                2
                5 months ago

                Your conclusion (which I assume is implied, since you didn’t bother to write it anywhere) might be something like:

                • Mathematical models built on enormous data sets do a good job of simulating human conversations (LLMs pass the Turing test)… THEREFORE, Homo sapiens lacks an innate capacity for language (i.e., the UG hypothesis is fundamentally mistaken).

                My issue is that I just don’t see how to draw this conclusion from your premises. If you were to reformulate your premises into a valid argument structure, we could discuss them and find some common ground.

                • @TropicalDingdong
                  -3
                  5 months ago

                  You haven’t demonstrated that you have any real comprehension of the domain, or that you bring anything interesting enough to this conversation to warrant continuing it.

                  • @[email protected]
                    1
                    5 months ago

                    Harsh words for someone who can’t even state a valid argument. I mean, do you expect me to guess how your conclusion follows from your unrelated premises?

                    1. Roses are red.
                    2. Violets are blue.
                    3. An LLM passed the Turing test.
                    4. Therefore, humans lack an innate language capacity.